Filtering records by a list of primary keys is a common pattern in Eloquent:

```php
$documents = Document::whereIntegerInRaw('documents.id', $documentIds)->get();
```

This works fine for a few hundred IDs. But once your `$documentIds` array grows beyond a few thousand items, MySQL performance drops dramatically. Even though `id` is indexed, a large `IN (...)` clause becomes slow because the optimizer expands it into a huge internal structure and may stop using the index altogether.

Let's look at a better way.
## Why `WHERE id IN (...)` becomes slow
When you pass thousands of IDs, MySQL must parse and sort them before matching against the index. For large lists, the optimizer may fall back to a full table scan, especially if it estimates that many rows will match. This quickly turns into seconds of runtime, even for simple lookups.
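You can observe this yourself with `EXPLAIN` (a sketch; the table name is illustrative, and the exact plan depends on your data distribution and MySQL version):

```sql
-- With enough literal values, the `type` column may show ALL (a full table
-- scan) instead of range, and `rows` may approach the size of the table.
EXPLAIN SELECT * FROM documents WHERE id IN (1, 2, 3 /* ...thousands more... */);
```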
## The trick: use a temporary table join
Instead of passing the entire ID list in the query, create a temporary table, insert your IDs into it, and join it with your main table. MySQL can optimize this join efficiently using indexed lookups.
Here's the pattern:

```php
// Create a temporary in-memory table
DB::statement('CREATE TEMPORARY TABLE temp_ids (id BIGINT PRIMARY KEY) ENGINE=Memory');

// Insert the IDs in chunks to keep each INSERT statement small
collect($documentIds)->unique()->chunk(1000)->each(function ($chunk) {
    $values = $chunk->map(fn ($id) => ['id' => (int) $id])->all();
    DB::table('temp_ids')->insert($values);
});

// Join the temporary table with your main table
$documents = Document::join('temp_ids', 'documents.id', '=', 'temp_ids.id')
    ->select('documents.*')
    ->get();
```
That's it — your lookup now runs as an indexed join (one primary-key lookup per ID) instead of scanning the entire `documents` table.
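For reference, here is the same pattern in raw SQL, outside Laravel (a sketch; names match the example above):

```sql
-- Session-scoped, in-memory table holding the IDs
CREATE TEMPORARY TABLE temp_ids (id BIGINT PRIMARY KEY) ENGINE=Memory;

-- One row per ID; batch many rows per INSERT in practice
INSERT INTO temp_ids (id) VALUES (1), (2), (3);

-- The filter becomes an indexed join
SELECT documents.* FROM documents
JOIN temp_ids ON documents.id = temp_ids.id;

-- Optional: the table is also dropped automatically when the connection closes
DROP TEMPORARY TABLE temp_ids;
```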
## A reusable helper
To make this easier, you can wrap the logic in a small helper class that automatically chooses between `whereIn` and the temporary-table join.

Create a helper at `app/Support/TempTableJoin.php`:
```php
<?php

namespace App\Support;

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Support\Facades\DB;

class TempTableJoin
{
    public static function apply(Builder $query, string $column, array $ids, int $threshold = 1000): Builder
    {
        if (empty($ids)) {
            return $query->whereRaw('1 = 0');
        }

        if (count($ids) <= $threshold) {
            return $query->whereIntegerInRaw($column, $ids);
        }

        $tempTable = 'temp_ids_' . uniqid();
        DB::statement("CREATE TEMPORARY TABLE {$tempTable} (id BIGINT PRIMARY KEY) ENGINE=Memory");

        // Deduplicate first to avoid primary-key violations, then insert in
        // chunks; each ID becomes its own parenthesized row in the VALUES list.
        foreach (array_chunk(array_unique($ids), 1000) as $chunk) {
            $values = implode(',', array_map(fn ($id) => '(' . intval($id) . ')', $chunk));
            DB::statement("INSERT INTO {$tempTable} (id) VALUES {$values}");
        }

        [$table, $col] = explode('.', $column);

        return $query->join($tempTable, "{$table}.{$col}", '=', "{$tempTable}.id");
    }
}
```
Then use it like this:

```php
use App\Support\TempTableJoin;

$documents = TempTableJoin::apply(Document::query(), 'documents.id', $documentIds)->get();
```
This keeps your code clean and automatically scales for large ID lists.
## Why this works

Temporary tables are session-scoped and stored in memory when using `ENGINE=Memory`. MySQL resolves the join with simple index comparisons (`documents.id = temp_ids.id`), which scales roughly linearly even for tens of thousands of IDs.
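To verify the plan, you can run `EXPLAIN` on the join (a sketch; the exact output varies by MySQL version and table statistics):

```sql
-- Expect temp_ids to be scanned and documents accessed via eq_ref on its
-- PRIMARY key: one index lookup per ID, no full scan of documents.
EXPLAIN SELECT documents.* FROM documents
JOIN temp_ids ON documents.id = temp_ids.id;
```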
## Practical notes
- Same connection: The temporary table exists only for the current connection. Within a single Laravel request, this is fine.
- Performance: Joining on 100k IDs is still nearly instant if both sides are indexed.
- Cleanup: The temporary table disappears automatically when the connection closes.
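One caveat worth sketching: on long-lived connections (queue workers, Laravel Octane), the table survives between jobs, so re-running the first snippet would fail on `CREATE`. A hedged sketch of explicit cleanup, using the `temp_ids` name from that example:

```php
// Drop the table explicitly when the connection outlives the request,
// e.g. in queue workers or Octane; IF EXISTS makes it safe to repeat.
DB::statement('DROP TEMPORARY TABLE IF EXISTS temp_ids');
```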
## When to use this approach
Use it when:

- You need to filter by more than ~1,000 IDs.
- The dataset is too large for multiple batched `IN` queries.
- You need consistent joins or aggregations over those IDs.

For smaller ID lists, a regular `whereIntegerInRaw()` is still simpler and fast enough.
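Since the helper returns a regular query builder, it composes with aggregations too. A sketch, assuming `Document` has a `status` column (hypothetical, for illustration only):

```php
use App\Support\TempTableJoin;

// Count matching documents per status across a large ID list
$counts = TempTableJoin::apply(Document::query(), 'documents.id', $documentIds)
    ->selectRaw('documents.status, COUNT(*) AS total')
    ->groupBy('documents.status')
    ->pluck('total', 'status');
```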
## Wrapping up

When your Eloquent query needs to filter on thousands of IDs, `WHERE IN` will quickly hit MySQL's limits.
Creating a temporary in-memory table and joining on it is an elegant, SQL-native optimization that scales cleanly without changing your application logic.
If this post was enjoyable or useful for you, please share it! If you have comments, questions, or feedback, you can reach me by email. To get new posts, subscribe via the RSS feed.