
When Eloquent's with() Makes Things Worse: Eager Loading Anti-Patterns on PostgreSQL

You solved the N+1 problem. You may have introduced three new ones. Here is what the wreckage looks like.

The Waiter of Gold Lapel · Published Mar 5, 2026 · Updated Mar 20, 2026 · 28 min read
One thousand queries arrived unannounced.

Good evening. I see you have discovered eager loading.

Congratulations are in order. You identified the N+1 query problem in your Laravel application, added with('comments') to your Eloquent query, and watched your 201 queries collapse into 2. Response time dropped. Your monitoring turned green. You moved on.

That was the right instinct. Eager loading is the textbook solution to the N+1 problem, and it works beautifully — in the textbook scenario. Twenty posts with ten comments each. A blog tutorial. A code review demo.

Production is not a blog tutorial. In production, you have 1,000 posts. Each has 47 comments on average. The comments table has a 4 KB body column, a 2 KB metadata JSONB column, and eleven other fields your view template never touches. Your eager load pulls every single byte of it.

The N+1 problem is a well-understood villain. What is less discussed — and what I encounter far more often in mature Laravel applications — are the anti-patterns that eager loading itself introduces when applied without constraint.

There are three, and they compound. But first, a moment of honesty.

An honest disclaimer before we begin

Eager loading is not wrong. It is one of the genuinely good ideas in ORM design. The alternative — lazy loading every relationship — produces the N+1 problem, which at scale is catastrophically worse than anything I am about to describe. If you are choosing between "no eager loading at all" and "eager loading everywhere," choose the latter. You will at least be in the right neighbourhood.

What follows is not an argument against with(). It is an argument for precision in how you use it. The difference between a well-applied eager load and a careless one is not marginal — it is, as the numbers will show, two orders of magnitude in every measurable dimension. But the careless version is still better than N+1.

I raise this because I have seen developers read articles like this one, panic, and rip out all their eager loads. That is exactly the wrong response. The correct response is to examine which loads are doing useful work and which are performing at the level of — if you will forgive me — a footman who polishes the silverware regardless of whether dinner has been cancelled.

Anti-pattern 1: The unbounded IN list

When you call Post::with('comments')->get(), Eloquent performs two queries. The first fetches the posts. The second fetches all comments for those posts using a WHERE post_id IN (...) clause containing every post ID from the first query.

The standard eager load
// A controller that fetches posts for an admin dashboard.
// Looks clean. Ships on Friday. Starts timing out on Tuesday.

$posts = Post::with('comments')
    ->where('published', true)
    ->get();

// On 1,000 published posts, Eloquent generates:

// Query 1: SELECT * FROM "posts" WHERE "published" = true
// Query 2: SELECT * FROM "comments"
//          WHERE "post_id" IN (1, 2, 3, 4, 5, ... 1000)

// That second query sends a 1,000-element IN list to PostgreSQL.
// Every column on the comments table comes back — including
// the 4 KB body TEXT and the metadata JSONB blob.

On 50 posts, that IN list has 50 elements. PostgreSQL handles this effortlessly — the optimizer evaluates each element, considers the available indexes, and picks the fastest path. Planning time: under 1ms.

On 1,000 posts, that IN list has 1,000 elements. The optimizer's cost model changes. PostgreSQL charges cpu_operator_cost (default: 0.0025) for each element comparison it expects to perform. Past roughly 100 elements, the planner frequently concludes that evaluating the IN list against an index is more expensive than a sequential scan of the entire table. So it scans.

Here is what that looks like in EXPLAIN ANALYZE:

EXPLAIN ANALYZE — 1,000-element IN list, no index
-- EXPLAIN ANALYZE on the eager-loaded comments query
-- 1,000 post IDs, comments table has 47,000 rows, no index on post_id

EXPLAIN ANALYZE
SELECT * FROM "comments"
WHERE "post_id" IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
    -- ... 990 more values ...
    991, 992, 993, 994, 995, 996, 997, 998, 999, 1000);

QUERY PLAN
--------------------------------------------------------------------
 Seq Scan on comments
   (cost=0.00..4218.50 rows=47000 width=1847)
   (actual time=0.031..892.441 rows=47000 loops=1)
   Filter: (post_id = ANY ('{1,2,3,...,1000}'::integer[]))
   Rows Removed by Filter: 0
 Planning Time: 48.217 ms
 Execution Time: 1024.893 ms

-- Two things to notice:
-- 1. Planning time: 48ms. PostgreSQL's optimizer evaluates each
--    element against cpu_operator_cost. Past ~100 elements, it often
--    abandons index consideration and falls back to a sequential scan.
-- 2. width=1847 bytes per row. That is SELECT * pulling the full
--    comment body and metadata JSONB. 47,000 rows x 1,847 bytes
--    = 86 MB transferred over the wire.

Two numbers tell the story. Planning time: 48ms — the optimizer spent longer deciding what to do than a well-indexed query takes to execute. And width=1847: each row carries 1,847 bytes, including the full comment body and metadata JSONB. Across 47,000 rows, that is 86 MB over the wire.

What the optimizer is actually doing

The PostgreSQL planner cost documentation explains the mechanics, though you have to piece it together from several pages. The planner charges cpu_operator_cost (default: 0.0025) for each IN-list element that may be evaluated against each row. A 1,000-element IN list on a 47,000-row table means up to 47 million operator evaluations in the cost estimate.

That 48ms planning time is not idle deliberation. PostgreSQL is computing the estimated cost of an index scan — evaluating each of the 1,000 values against the index selectivity statistics — and comparing it to the cost of a sequential scan. The sequential scan cost is simple: read every page, check every tuple. The index scan cost scales with the IN-list size. Past a threshold that depends on table statistics and index correlation, the sequential scan wins on estimated cost.

The irony is that the index scan would have been faster in wall-clock time. But the optimizer does not know that — it works from statistics, not prophecy. And the statistics say that evaluating 1,000 individual index lookups, with potential random I/O for each, is more expensive than one linear scan. On spinning disks, that estimate would often be correct. On SSDs, it almost never is. The default cost parameters were calibrated for an era of rotational storage, and most PostgreSQL installations have not adjusted them.

You can nudge the optimizer by lowering random_page_cost from its default of 4.0 to something closer to 1.1 for SSD storage. But that is a server-wide setting that affects every query, and adjusting it for one eager-load pattern risks destabilizing plans elsewhere. The more surgical fix is to ensure the index exists and the column selection is narrow.
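If you want to see how the planner behaves under SSD-calibrated costs before committing to a configuration change, you can scope the setting to a single database session. A sketch of that experiment; SET is connection-scoped in PostgreSQL, so only queries issued on this Laravel connection are affected:

```php
use Illuminate\Support\Facades\DB;

// Trial an SSD-calibrated cost model for this connection only.
// The server-wide default in postgresql.conf is untouched and
// other connections are unaffected.
DB::statement('SET random_page_cost = 1.1');

// Re-run the eager load and compare the EXPLAIN ANALYZE output.
// If the planner now picks the index scan, you have evidence for
// a config change -- but prefer the index + narrow select first.
$posts = Post::with(['comments' => function ($query) {
    $query->select('id', 'post_id', 'author_name', 'created_at');
}])->where('published', true)->get();

// With persistent or pooled connections the setting lingers on the
// connection; issue RESET random_page_cost when the experiment ends.
```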

Compare the unconstrained query to one with a proper index and column selection:

EXPLAIN ANALYZE — with index and column selection
-- Same query, but with an index on comments.post_id
-- and selecting only the columns we need:

EXPLAIN ANALYZE
SELECT id, post_id, author_name, created_at
FROM "comments"
WHERE "post_id" IN (1, 2, 3, ... 1000);

QUERY PLAN
--------------------------------------------------------------------
 Index Scan using idx_comments_post_id on comments
   (cost=0.29..1842.16 rows=47000 width=52)
   (actual time=0.024..38.712 rows=47000 loops=1)
   Index Cond: (post_id = ANY ('{1,2,3,...,1000}'::integer[]))
 Planning Time: 12.841 ms
 Execution Time: 44.218 ms

-- From 1,024ms to 44ms. Two changes:
-- 1. Index on post_id lets PostgreSQL skip the sequential scan
-- 2. Selecting 4 columns instead of * drops width from 1,847 to 52 bytes
--    Wire transfer drops from 86 MB to 2.4 MB

From 1,024ms to 44ms. Same data. Same IN list. The difference is an index and four columns instead of eleven.

The unnest() alternative

There is another approach that sidesteps the IN-list cost model entirely. Instead of sending WHERE post_id IN (1, 2, 3, ... 1000), you can unnest the array into a virtual table and join against it:

unnest() + JOIN — avoiding IN-list evaluation
-- Alternative: unnest the IN list into a subquery join
-- This gives the optimizer a different cost path to evaluate

EXPLAIN ANALYZE
SELECT c.id, c.post_id, c.author_name, c.created_at
FROM "comments" c
JOIN unnest(ARRAY[1, 2, 3, ... 1000]) AS pid(id) ON c.post_id = pid.id;

QUERY PLAN
--------------------------------------------------------------------
 Hash Join
   (cost=27.50..2104.22 rows=47000 width=52)
   (actual time=0.412..31.884 rows=47000 loops=1)
   Hash Cond: (c.post_id = pid.id)
   -> Seq Scan on comments c
        (cost=0.00..1547.00 rows=47000 width=52)
        (actual time=0.008..8.241 rows=47000 loops=1)
   -> Hash
        (cost=15.00..15.00 rows=1000 width=4)
        (actual time=0.381..0.381 rows=1000 loops=1)
          -> Function Scan on unnest pid
               (cost=0.00..15.00 rows=1000 width=4)
               (actual time=0.142..0.228 rows=1000 loops=1)
 Planning Time: 1.247 ms
 Execution Time: 35.112 ms

-- Planning time drops from 48ms to 1.2ms.
-- The optimizer treats unnest() as a virtual table, not a list
-- of individual equality checks. Hash Join replaces ANY evaluation.

Planning time drops from 48ms to 1.2ms. The optimizer treats unnest() as a table, not a list of comparisons. It builds a hash of the 1,000 values and probes it once per comments row — a fundamentally different algorithm than evaluating ANY for each element.

Eloquent does not generate this pattern natively. You would need a raw query or a scope that uses DB::raw(). For most applications, the simpler path is adding the index, constraining the columns, and keeping the IN list under 500 elements through chunking. But if you have a use case that genuinely requires thousands of IDs in a single query — a bulk export, a data migration, a reporting pipeline — the unnest() join is worth knowing about.
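If chunking is the route you take, here is a sketch of what it looks like in Eloquent. chunkById() pages the parent query by primary key, so each chunk's eager load sends an IN list no larger than the chunk size. The chunk size of 500 is an assumption; tune it against your own planning times:

```php
use App\Models\Post;

Post::with(['comments' => function ($query) {
    // narrow select: id and the foreign key, plus what the view uses
    $query->select('id', 'post_id', 'author_name', 'created_at');
}])
    ->where('published', true)
    ->chunkById(500, function ($posts) {
        // each chunk eager-loads comments with an IN list of at most
        // 500 post IDs, comfortably inside the optimizer's sweet spot
        foreach ($posts as $post) {
            // ... process the post ...
        }
    });
```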

Anti-pattern 2: SELECT * on eager-loaded relationships

Eloquent's with() defaults to SELECT * on the related table. Every column. Every row. Every time.

This matters less when your related table has five narrow columns. It matters enormously when it has TEXT fields, JSONB blobs, or binary data. The comments table in our example carries body TEXT (average 4 KB) and metadata JSONB (average 2 KB). Those two columns account for 96% of the per-row byte cost.

If your view only displays the author name and timestamp — which is the common case for a post listing page — you are transferring 275 MB to display data that fits in 2.4 MB.

Constraining columns on eager loads
// Anti-pattern: with() loads all columns by default
$posts = Post::with('comments')->get();
// SELECT * FROM comments WHERE post_id IN (...)
// Transfers: id, post_id, body (TEXT, avg 4KB), metadata (JSONB, avg 2KB),
//            author_name, author_email, ip_address, user_agent,
//            created_at, updated_at, deleted_at
// Per row: ~6 KB. For 47,000 comments: ~275 MB in PHP memory.

// Fix: constrain the eager load to the columns you actually use
$posts = Post::with(['comments' => function ($query) {
    $query->select('id', 'post_id', 'author_name', 'created_at');
}])->get();
// Per row: ~52 bytes. For 47,000 comments: ~2.4 MB.
// Same data you display in the UI. 99% less memory.

// Fix: or use withCount() when you only need the number
$posts = Post::withCount('comments')->get();
// SELECT posts.*, (SELECT count(*) FROM comments
//   WHERE comments.post_id = posts.id) AS comments_count
// FROM posts WHERE published = true
// No comments loaded at all. One integer per post.

The constrained eager load is a one-line change. The impact is not marginal — it is a 99% reduction in memory consumption and wire transfer.

The foreign key trap

There is a critical detail that I must emphasize, because it trips up roughly half the developers I see attempting this optimization for the first time: when constraining columns in a closure-based eager load, you must include the foreign key column (post_id in this case). Eloquent needs it to match comments back to their parent posts. Omit it and every post's comments relationship will be an empty collection. No error. No warning. Just silent, bewildering emptiness.

This is worth a rule: always include id and the foreign key in constrained selects. Everything else is negotiable. Those two are not.
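To make the trap concrete, here is the failure mode side by side, using the same model names as the rest of this article:

```php
// BROKEN: the foreign key is missing from the select. The comments
// query runs and returns rows, but Eloquent cannot match them back
// to their posts. Every relationship hydrates as an empty collection.
$posts = Post::with(['comments' => function ($query) {
    $query->select('id', 'author_name', 'created_at'); // no post_id!
}])->get();
// $posts->first()->comments is an empty Collection. No error. No warning.

// CORRECT: id and the foreign key first, then whatever the view needs.
$posts = Post::with(['comments' => function ($query) {
    $query->select('id', 'post_id', 'author_name', 'created_at');
}])->get();
```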

TOAST: the hidden cost of SELECT *

The wire transfer cost is visible in EXPLAIN ANALYZE. What is less visible — and often more expensive — is the TOAST decompression cost on the PostgreSQL side.

When a TEXT or JSONB value exceeds approximately 2 KB, PostgreSQL stores it out-of-line in a TOAST table. The main heap row contains a pointer. When you SELECT *, PostgreSQL follows every pointer, decompresses every value, and returns the full content. When you select only non-TOASTed columns, it never touches the TOAST table at all.

TOAST storage — the invisible cost of SELECT *
-- When body TEXT exceeds ~2 KB, PostgreSQL stores it in TOAST.
-- SELECT * forces PostgreSQL to decompress every TOASTed value.

-- To see TOAST activity on your comments table:
SELECT
    relname,
    n_tup_fetch AS heap_fetches,
    pg_size_pretty(pg_relation_size(relid)) AS table_size,
    pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid))
        AS toast_and_index_size
FROM pg_stat_user_tables
WHERE relname = 'comments';

--  relname  | heap_fetches | table_size | toast_and_index_size
-- ----------+--------------+------------+---------------------
--  comments |      4700000 |     12 MB  |        247 MB
--
-- The table itself is 12 MB. The TOAST data is 247 MB.
-- SELECT * on this table decompresses 247 MB of TOAST for
-- every eager load. SELECT id, post_id, author_name, created_at
-- never touches TOAST at all.

In this example, the TOAST data is 20 times larger than the table itself. Every unconstrained eager load forces PostgreSQL to decompress 247 MB of out-of-line storage. A constrained select on narrow columns bypasses TOAST entirely. The savings are not just in wire transfer — they are in CPU time, I/O, and shared buffer pressure on the PostgreSQL server.

I have observed production systems where the comments table was 400 MB on disk and the TOAST table was 8 GB. Every with('comments') call was quietly decompressing gigabytes of TOAST data that the application discarded after hydration. The developers were investigating network latency. The network was fine. The database was doing eight times more work than anyone realized.

withCount(): when numbers are all you need

And when you only need a count — which is surprisingly often — withCount() eliminates the relationship load entirely:

withCount() — when you only need numbers
// When you only need counts, withCount() is categorically better.
// See: /grounds/laravel-php/eloquent-withcount-postgres-performance

$posts = Post::withCount(['comments', 'tags'])
    ->where('published', true)
    ->orderBy('comments_count', 'desc')
    ->get();

// PostgreSQL handles this as correlated subqueries:
// SELECT "posts".*,
//   (SELECT count(*) FROM "comments"
//    WHERE "comments"."post_id" = "posts"."id") AS "comments_count",
//   (SELECT count(*) FROM "post_tag"
//    WHERE "post_tag"."post_id" = "posts"."id") AS "tags_count"
// FROM "posts"
// WHERE "published" = true
// ORDER BY "comments_count" DESC

// One query. No IN list. No relationship data transferred.
// The database does the counting where it is fastest.

One query. No IN list. No relationship data transferred. The database does the counting where it is fastest — in the engine, close to the data, without serialization overhead. For a deeper treatment of how PostgreSQL handles these correlated subqueries and when they degrade, see the Eloquent withCount() guide.

Anti-pattern 3: The $with property

This is the one that causes the most damage in practice, because it is invisible at the call site.

Laravel's Eloquent models support a $with property — an array of relationships that are eager-loaded on every query against that model. The intention is convenience: define it once, never worry about N+1 queries again.

The reality is that it fires on every query. Every find(). Every all(). Every where()->get(). Every Nova resource index. Every Artisan command. Every test. Every API endpoint. Every queue job. You wanted eager loading on one page. You got it everywhere.

The $with property — global eager loading
// app/Models/Post.php
// The $with property: eager loading on every single query, forever.

class Post extends Model
{
    // This loads comments on EVERY query that touches the Post model.
    // Every. Single. One.
    protected $with = ['comments'];

    public function comments()
    {
        return $this->hasMany(Comment::class);
    }

    public function author()
    {
        return $this->belongsTo(User::class);
    }

    public function tags()
    {
        return $this->belongsToMany(Tag::class);
    }
}

// Now consider what happens throughout your application:

// Dashboard: "show me post count by status"
Post::selectRaw('status, count(*) as total')
    ->groupBy('status')
    ->get();
// You wanted 3 rows. You also got 47,000 comments loaded
// into memory. The $with property does not care about your intent.

// API endpoint: "check if post 42 exists"
Post::find(42);
// You wanted a boolean. You got the post AND all its comments.

// Artisan command: "update all post slugs"
Post::all()->each(function ($post) {
    $post->update(['slug' => Str::slug($post->title)]);
});
// For 1,000 posts, this loads 47,000 comments into memory
// just to update a slug field that has nothing to do with comments.

The Laravel eager loading documentation describes $with as a way to "always eager load certain relationships." What it does not emphasize is the blast radius. Every query that touches the model inherits the eager load, including queries from packages, admin panels, and background jobs that have nothing to do with the original use case.

The Nova cascade

Laravel Nova issue #246 showed this cascading in production: a model with $with loading three relationships, and Nova internally adding its own eager loads for media, actions, and metrics. The result was 12 queries per page load on a 25-row admin table. Each query carried its own IN list. The page took 4 seconds.

The Nova cascade
// Laravel Nova admin panels: where $with goes to cascade.
//
// Nova issue #246 demonstrated this pattern in production:
// A Post model with $with = ['comments', 'tags', 'author']
// Nova's resource index called Post::with() internally,
// stacking its own eager loads on top of the model's $with.

// The result for a simple admin table listing:
// Query 1: SELECT * FROM posts ORDER BY id DESC LIMIT 25
// Query 2: SELECT * FROM comments WHERE post_id IN (1..25)
// Query 3: SELECT * FROM tags INNER JOIN post_tag ...
// Query 4: SELECT * FROM users WHERE id IN (...)
// Query 5: SELECT * FROM media WHERE model_id IN (...)  -- Nova media
// Query 6: SELECT * FROM actions WHERE actionable_id IN (...)
// ... up to 12+ queries for a 25-row admin table.
//
// Each relationship in $with triggers its own IN-list query.
// Nova adds its own. Packages add theirs. Nobody sees the total
// until the page takes 4 seconds to load.

Nova is not the only framework that adds its own eager loads. Filament, Backpack, and Livewire components all interact with models in ways that compound with $with. Any package that calls Model::query() inherits the full $with chain. The developer who added the property has no control over — and often no visibility into — how packages use the model.

The nested $with chain

The damage multiplies when $with appears on multiple models in a relationship chain. Each model's $with triggers when that model is loaded as a relationship, creating a cascade that no single developer intended:

Nested $with — the chain reaction
// The $with property compounds with nested relationships.
// Each level adds another IN-list query per parent.

class Post extends Model
{
    protected $with = ['comments'];
}

class Comment extends Model
{
    protected $with = ['author', 'reactions'];
}

class Author extends Model
{
    protected $with = ['profile'];
}

// Post::where('published', true)->get() now generates:
//
// Query 1: SELECT * FROM "posts" WHERE "published" = true
// Query 2: SELECT * FROM "comments" WHERE "post_id" IN (1..1000)
// Query 3: SELECT * FROM "users" WHERE "id" IN (...)      -- comment authors
// Query 4: SELECT * FROM "reactions" WHERE "comment_id" IN (...)
// Query 5: SELECT * FROM "profiles" WHERE "user_id" IN (...)
//
// Five queries. Four IN lists. Three levels of nesting.
// Nobody wrote with('comments.author.profile') —
// the $with chain assembled itself across model files.
// The developer who added $with = ['profile'] to the Author
// model has no idea it fires on the blog listing page.

This is the pattern that produces the truly astonishing query counts — 15, 20, 30 queries on a single page load — that appear in performance audits. The developer who added $with = ['profile'] to the Author model was solving an N+1 on the author profile page. They had no reason to consider that Author models are also loaded as comment relationships, which are loaded as post relationships, which fire on the blog listing page. The $with chain assembled itself across three files, and nobody saw the total until production slowed to a crawl.

Fixing the $with property

The cleanest fix: remove relationships from $with entirely and load them explicitly where needed. This is the approach I recommend for any model that appears in more than two contexts.

If removing $with is impractical — perhaps the model is used in 40 places and most of them need the relationship — Laravel provides escape hatches:

Bypassing $with on specific queries
// Surgical fix: disable $with for specific queries
// when you do not need the relationships.

// Option 1: without() — remove specific eager loads
$posts = Post::without('comments')
    ->where('status', 'draft')
    ->get();
// No comments loaded. The $with property is bypassed for this query.

// Option 2: withOnly() — replace $with entirely
$posts = Post::withOnly('author')
    ->where('published', true)
    ->get();
// Only loads the author relationship, ignoring everything in $with.

// Option 3: setEagerLoads([]) — nuclear option
$query = Post::query();
$query->setEagerLoads([]);
$posts = $query->get();
// No eager loading at all. Clean slate.

without() has been available since the Laravel 5.x era; withOnly() arrived later, during the Laravel 8.x release cycle. If your version predates either, setEagerLoads([]) on the query builder is the fallback.

The principle: eager loading should be opt-in per query, not opt-in globally with per-query overrides. The $with property inverts this, and the result is a model that drags its relationships behind it like a bridal train — impressive at first, increasingly burdensome as the application grows.

A better architecture: query scopes

The replacement for $with that I recommend most often is a set of named query scopes that declare their data needs explicitly. Each scope loads exactly the relationships required for its context:

Query scopes — explicit data loading per context
// A cleaner architecture: scopes that declare their data needs.
// Each scope loads exactly the relationships it requires.
// No global $with. No guessing.

class Post extends Model
{
    // No $with property at all.

    public function scopeForListing($query)
    {
        return $query->with(['comments' => function ($q) {
            $q->select('id', 'post_id', 'author_name', 'created_at');
        }]);
    }

    public function scopeForDetail($query)
    {
        return $query->with(['comments', 'tags', 'author']);
    }

    public function scopeForExport($query)
    {
        return $query->withCount('comments')
            ->without('tags', 'author');
    }

    public function scopeForAdmin($query)
    {
        return $query->withCount(['comments', 'tags']);
    }
}

// Usage is self-documenting:
$posts = Post::forListing()->where('published', true)->paginate(25);
$post = Post::forDetail()->findOrFail($id);
$data = Post::forExport()->cursor();
$rows = Post::forAdmin()->paginate(50);

// Each call site loads exactly what it needs. Nothing more.

This pattern has three advantages over $with. First, the data requirements are visible at the call site — you can read Post::forListing() and know exactly which relationships it loads without opening the model file. Second, each context loads only what it needs, so the admin page does not pay for the detail page's relationships. Third, new contexts start clean — adding a new API endpoint does not inherit a chain of eager loads from a property defined eighteen months ago for a different use case.

The trade-off is verbosity. You write Post::forListing() instead of Post::all(). That is not a trade-off I find particularly costly. Explicit beats implicit when the implicit version fires 47,000 unnecessary rows into memory.

Diagnosing eager loading issues

The challenge with eager loading anti-patterns is that they are invisible in the application code. The query that causes trouble is not in your controller or your Blade template — it is generated by Eloquent at runtime, behind an abstraction that was designed to spare you from thinking about SQL. An admirable goal that becomes a liability when you need to think about SQL.

There are two levels of diagnosis: development and production.

Diagnosing with Debugbar and query logging
// Diagnosing eager loading issues with Laravel Debugbar
// composer require barryvdh/laravel-debugbar --dev

// In config/debugbar.php, ensure:
// 'collectors' => [
//     'queries' => true,
//     'models'  => true,   // shows hydrated model counts
//     'memory'  => true,
// ]

// What to look for in the Debugbar Queries tab:
//
// 1. IN lists in WHERE clauses — count the elements
//    WHERE "post_id" IN (1, 2, 3, ... 847)
//    ^^ 847 elements. The optimizer is not enjoying this.
//
// 2. Duplicate relationship loads — same table queried multiple times
//    SELECT * FROM "comments" WHERE "post_id" IN (...)  -- 2.4ms
//    SELECT * FROM "comments" WHERE "post_id" IN (...)  -- 2.1ms
//    ^^ Same query, different call sites. $with strikes again.
//
// 3. The Models tab — shows hydrated model counts
//    Post: 25
//    Comment: 1,175
//    Tag: 87
//    User: 25
//    ^^ You loaded 1,175 Comment models for a 25-row table.
//       If the page only shows comment counts, that is waste.

// For production: use clockwork or query logging
\DB::listen(function ($query) {
    // Eloquent's grammar emits lowercase keywords, so match case-insensitively
    if (stripos($query->sql, ' in (') !== false && $query->time > 50) {
        Log::warning('Slow IN-list query', [
            'sql' => $query->sql,
            'time' => $query->time,
            'bindings_count' => count($query->bindings),
        ]);
    }
});

In development, Laravel Debugbar is indispensable. The Queries tab shows every query with its execution time and the call stack that generated it. The Models tab shows how many models of each type were hydrated — and when you see 1,175 Comment models loaded for a 25-row admin table, the diagnosis is immediate.

In production, the DB::listen() callback lets you log slow IN-list queries without the overhead of a full debugging toolbar. Look for queries where the bindings count exceeds 200 and the execution time exceeds 50ms. Those are your eager loading hot spots.

One more tool worth knowing: Model::preventLazyLoading(), introduced in Laravel 8.43. It throws an exception on any lazy-loaded relationship, forcing you to declare with() explicitly. The intention is excellent — it catches N+1 queries during development. The risk is that developers respond to the exception by adding $with to the model instead of with() at the call site. One solves the problem. The other creates three new ones.

preventLazyLoading() — use it, but use it correctly
// Laravel 8.43+: preventLazyLoading() — the N+1 safety net
// Add to AppServiceProvider::boot()

Model::preventLazyLoading(!app()->isProduction());

// In development and testing, any lazy-loaded relationship
// throws an exception instead of silently executing a query.
// This forces you to declare with() explicitly on every query.
//
// The trap: developers respond to the exception by adding $with
// to the model — solving the N+1 but creating the IN-list problem.
// The correct response is with() at the call site, not $with on
// the model.

// Better: preventLazyLoading + handleLazyLoadingViolationUsing
// Log violations in production instead of crashing:
Model::preventLazyLoading();
Model::handleLazyLoadingViolationUsing(function ($model, $relation) {
    if (app()->isProduction()) {
        Log::warning('Lazy loading violation', [
            'model' => get_class($model),
            'relation' => $relation,
        ]);
    } else {
        throw new LazyLoadingViolationException($model, $relation);
    }
});

When with() helps vs. when it hurts

Eager loading is not inherently wrong. It is the correct solution to the N+1 problem. The question is whether you are applying it with precision or with a paint roller.

| Scenario | with() | Better approach | Why |
|---|---|---|---|
| Display posts with comment counts | Wasteful | withCount() | You need a number, not 47,000 objects |
| Show post with its 5 latest comments | Acceptable (constrained) | with() + select() + limit() | Constrain columns and row count |
| List posts in admin table | Harmful in $with | Load per-route | $with fires on Nova, API, CLI, tests... |
| API: return post with all comments | Appropriate | with() + select() + paginate | Still constrain columns; paginate the relationship |
| Check if post exists | Pure waste | exists() or without()->find() | No relationship data needed whatsoever |
| Export 10,000 posts to CSV | Dangerous | chunk() + constrained with() | IN list over 1,000 degrades the optimizer; memory explodes |
| Real-time search/filter | Slow | Dedicated query with JOIN | Eager load overhead on every keystroke adds up |
| Queue job processing orders | Context-dependent | chunkById() + constrained with() | Background jobs process thousands; unbounded get() is reckless |

The pattern: with() is appropriate when you actually need the related data, you constrain the columns, and you control the row count. It is harmful when used as a blanket, when it pulls columns you discard, or when the IN list grows past what the PostgreSQL optimizer handles gracefully.

Polymorphic relationships: eager loading's most awkward guest

Polymorphic relationships deserve special mention because they interact with eager loading in ways that multiply the query count per morph type. Where a standard belongsTo produces one IN-list query, a morphTo produces one per distinct type.

Polymorphic eager loading — queries multiply by type count
// Polymorphic relationships and eager loading:
// the type column means PostgreSQL cannot use a single index efficiently.

class Comment extends Model
{
    public function commentable()
    {
        return $this->morphTo();
    }
}

// with('commentable') on a polymorphic relation generates
// one query per type. If comments belong to Posts, Videos,
// and Photos, Eloquent sends:
//
// SELECT * FROM "comments" WHERE ...
// SELECT * FROM "posts" WHERE "id" IN (...)     -- type = 'App\Models\Post'
// SELECT * FROM "videos" WHERE "id" IN (...)    -- type = 'App\Models\Video'
// SELECT * FROM "photos" WHERE "id" IN (...)    -- type = 'App\Models\Photo'
//
// Three IN-list queries instead of one. Each hits a different table.
// The commentable_type column is a string — PostgreSQL compares it
// as text, not an enum, unless you have explicitly cast it.
//
// With $with = ['commentable'] on the Comment model, every Comment
// query generates 1 + N queries, where N is the number of distinct
// morph types. On a system with 6 commentable types, that is 7 queries
// per page load. Per page.

If your Comment model has $with = ['commentable'] and comments can belong to six different model types, every query that loads comments generates seven queries: one for the comments themselves, and one per morph type. Add that to a Post model with $with = ['comments'], and you have a chain where loading 25 posts triggers the comments load, which triggers six morph resolution queries. Eight queries total. For an admin table that displays post titles and comment counts.

The fix for polymorphic relationships is almost always withCount() or without(). If you need the actual commentable data, load it explicitly at the call site where you know which types you will encounter. Do not let $with resolve morphs globally — the cost scales with the number of types, and types only grow over time.
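Call-site morph loading with per-type column constraints can look like the following sketch. The constrain() method on the MorphTo relation is available in recent Laravel versions (8.x and later); the type list and column choices here are assumptions for illustration.

```php
<?php

use Illuminate\Database\Eloquent\Relations\MorphTo;

// Sketch: resolve morphs explicitly where you know which types occur,
// constraining the columns fetched per type instead of relying on $with.
$comments = Comment::query()
    ->with(['commentable' => function (MorphTo $morphTo) {
        $morphTo->constrain([
            Post::class  => fn ($query) => $query->select('id', 'title'),
            Video::class => fn ($query) => $query->select('id', 'title'),
        ]);
    }])
    ->latest()
    ->limit(50)
    ->get();

// Only the morph types actually present in these 50 comments generate
// per-type IN-list queries, and each query fetches two columns
// instead of every field on posts and videos.
```

The limit() on the parent query matters as much as the constraints: it bounds both the comment count and, transitively, the size of every per-type IN list.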

JOIN vs. eager load: when one query beats two

There are scenarios where the two-query pattern of eager loading — one for parents, one for children — is structurally slower than a single JOIN. This is particularly true when you need filtered relationships or when the result will be used for display only, without needing Eloquent's Collection methods.

JOIN vs. eager load — one round trip vs. two
// When you need filtered relationships, a JOIN is often
// faster than with() + closure constraints.

// Eager load approach: two queries
$posts = Post::with(['comments' => function ($query) {
    $query->where('approved', true)
          ->where('created_at', '>=', now()->subDays(7))
          ->select('id', 'post_id', 'author_name', 'created_at');
}])->where('published', true)->get();

// Query 1: SELECT * FROM "posts" WHERE "published" = true
// Query 2: SELECT id, post_id, author_name, created_at
//          FROM "comments"
//          WHERE "post_id" IN (1..1000)
//            AND "approved" = true
//            AND "created_at" >= '2026-03-04'

// JOIN approach: one query, one round trip
$posts = Post::query()
    ->join('comments', function ($join) {
        $join->on('comments.post_id', '=', 'posts.id')
             ->where('comments.approved', true)
             ->where('comments.created_at', '>=', now()->subDays(7));
    })
    ->where('posts.published', true)
    ->select('posts.*', 'comments.author_name', 'comments.created_at as comment_date')
    ->get();

// One query. One round trip. No IN list.
// The trade-off: you lose Eloquent's automatic Collection hydration
// of the relationship. $post->comments won't be a Collection —
// the comment data is flat on the Post model.
// For display-only use cases, that is perfectly acceptable.

The JOIN approach eliminates the IN list entirely. PostgreSQL handles the join with a single hash or merge join, using one round trip instead of two. Planning time is minimal because the optimizer evaluates one query, not two.

The trade-off is real: you lose Eloquent's automatic relationship hydration. $post->comments will not be a Collection — the comment data is flattened onto the post model. For API responses built with toArray() or Blade templates that iterate over $post->comments, this is a problem. For Vue/React frontends that receive JSON, for CSV exports, for reporting dashboards — the flattened data is often exactly what you need.

I am not suggesting you replace every eager load with a JOIN. That would be throwing away one of Eloquent's genuine strengths — the ability to work with nested data as objects. But when performance matters and the data is consumed flat, the JOIN is the right tool. Use both. Choose per context.

"The gap between what the ORM expresses and what PostgreSQL executes is where performance problems live. Not in the database. Not in the application. In the translation."

— from You Don't Need Redis, Chapter 3: The ORM Tax

Chunking: keeping IN lists within optimizer range

When you must process thousands of records with eager-loaded relationships, chunk() keeps the IN list size manageable:

Chunking to control IN-list size
// Anti-pattern: loading 10,000 posts with eager loading
$posts = Post::with('comments')->get();
// IN list with 10,000 elements. PostgreSQL will not be pleased.
// PHP memory: potentially gigabytes.

// Better: chunk the query
Post::with(['comments' => function ($query) {
    $query->select('id', 'post_id', 'author_name', 'created_at');
}])->chunk(200, function ($posts) {
    foreach ($posts as $post) {
        // Process each batch of 200 posts
        // IN list stays at 200 elements — well within optimizer range
    }
});

// Even better for write operations: chunkById
Post::chunkById(200, function ($posts) {
    // Uses WHERE id > ? LIMIT 200 instead of OFFSET
    // No IN list at all for the parent query
});

The magic number is somewhere between 100 and 500 elements. Below 100, the PostgreSQL optimizer reliably uses indexes on the IN-list target column. Above 500, planning time increases measurably and sequential scan probability rises. Above 1,000, you are likely paying more in planning time than the query itself costs to execute.

Laravel's chunk() method handles the pagination automatically. Each chunk generates its own eager load with a bounded IN list. Memory stays flat because each chunk is garbage-collected before the next one loads.

chunk() vs. cursor() vs. lazy(): choosing the right iteration

Laravel offers three iteration strategies for large result sets, and each interacts with eager loading differently:

Iteration strategies and eager loading compatibility
// cursor() — stream rows one at a time, no IN list
Post::where('published', true)->cursor()->each(function ($post) {
    // Each post is hydrated individually from a server-side cursor
    // Memory: one model at a time
    // Queries: one SELECT with a PostgreSQL cursor behind the scenes
    // But: no eager loading possible. Each $post->comments
    //      triggers a lazy load — back to N+1.
});

// The trade-off matrix:
// ┌─────────────────┬──────────┬────────────┬──────────────┐
// │ Method          │ Memory   │ IN list    │ Eager load?  │
// ├─────────────────┼──────────┼────────────┼──────────────┤
// │ get()           │ All rows │ All IDs    │ Yes          │
// │ chunk(200)      │ 200 rows │ 200 IDs    │ Yes          │
// │ chunkById(200)  │ 200 rows │ 200 IDs    │ Yes          │
// │ cursor()        │ 1 row    │ None       │ No           │
// │ lazy(200)       │ 200 rows │ 200 IDs    │ Yes          │
// └─────────────────┴──────────┴────────────┴──────────────┘
//
// lazy() (Laravel 8+) combines chunk's batching with
// cursor's iteration API. Best of both for read operations.

// For a deeper treatment of chunk vs cursor vs lazy:
// See: /grounds/laravel-php/laravel-chunk-cursor-lazy-postgres

The key insight: cursor() is incompatible with eager loading. It streams rows one at a time through a PostgreSQL cursor, which means Eloquent has no batch of IDs to build an IN list from. Any relationship access on a cursor-iterated model triggers a lazy load — and you are back to N+1.

lazy(), introduced in Laravel 8, is the synthesis: it batches like chunk() but returns a LazyCollection that you can iterate with each(), map(), and filter(). Eager loading works within each batch. For read-heavy operations on large datasets, it is the correct choice. For a comprehensive comparison with PostgreSQL-specific considerations, see the chunk, cursor, and lazy guide on PostgreSQL.
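A lazy() iteration with a constrained eager load might look like this sketch — the column list and the batch size of 200 are assumptions chosen to match the optimizer range discussed above:

```php
<?php

// Sketch: lazy() batches like chunk() (bounded IN lists per batch)
// but exposes cursor()-style LazyCollection iteration, and unlike
// cursor(), eager loading still works.
Post::query()
    ->where('published', true)
    ->with(['comments' => function ($query) {
        $query->select('id', 'post_id', 'author_name', 'created_at');
    }])
    ->lazy(200)
    ->each(function ($post) {
        // $post->comments is a fully hydrated Collection here,
        // loaded per 200-row batch via an IN list of at most 200 ids.
        // Memory stays flat: each batch is released before the next.
    });
```

The practical rule: reach for cursor() only when you touch no relationships at all; otherwise lazy() gives you the same iteration ergonomics without falling back into N+1.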

The documentation gap
// From the Laravel documentation on eager loading:
// https://laravel.com/docs/11.x/eloquent-relationships#eager-loading
//
// "When accessing Eloquent relationships as properties, the related
//  models are 'lazy loaded'. This means the relationship data is not
//  actually loaded until you first access the property."
//
// What the docs do not emphasize: the alternative — with() —
// loads EVERYTHING upfront, and $with loads it on EVERY query.
// The space between "lazy is bad" (N+1) and "eager is bad"
// (massive IN lists, wasted columns) is where the real work happens.

The gap the documentation leaves is the space between "lazy loading causes N+1" and "eager loading is the solution." The real question — how large is the IN list, how wide are the rows, how many columns does the view actually need — receives no treatment. The assumption is that eager loading is always better than lazy loading. Directionally, that is correct. In the details, it can be spectacularly wrong.

The numbers, summarized

For a concrete reference point — 1,000 posts, 47,000 comments, PostgreSQL 16:

| Approach | Queries | Planning | Execution | Wire transfer |
| --- | --- | --- | --- | --- |
| with('comments') — no index, SELECT * | 2 | 48 ms | 1,024 ms | 86 MB |
| with('comments') — indexed, SELECT * | 2 | 14 ms | 218 ms | 86 MB |
| with() — indexed, constrained columns | 2 | 13 ms | 44 ms | 2.4 MB |
| unnest() JOIN — indexed, constrained columns | 1 | 1.2 ms | 35 ms | 2.4 MB |
| withCount('comments') | 1 | 0.4 ms | 8 ms | 0.02 MB |
| Worst to best | 2x | 120x | 128x | 4,300x |

The difference between the worst case (unconstrained with(), no index) and withCount() is two orders of magnitude in planning and execution time, and more than three in wire transfer. These are not theoretical projections — they are measurements from a dataset that fits in 12 MB on disk. At production scale, the gaps widen.

I should note the unnest() JOIN row. It achieves nearly the same execution time as the constrained with() with roughly a tenth of the planning time. On a page that loads on every request — a listing page, a dashboard, an API endpoint — that planning time adds up. Three hundred requests per minute at 13 ms of planning each is nearly 4 seconds of cumulative planning time every minute; at 1.2 ms, it is under half a second. The optimizer's time is your server's time.

A checklist for eager loading on PostgreSQL

If you take away one practical tool from this article, let it be this. Before shipping any with() call to production, verify these five conditions:

  1. The foreign key column has an index. Check with \d comments in psql or by querying the pg_indexes view. PostgreSQL does not index the referencing side of a foreign key automatically, and Laravel does not create one for you: $table->foreignId('post_id') creates the column, and $table->foreignId('post_id')->constrained() adds the foreign key constraint, but neither adds an index on PostgreSQL. Add one explicitly: $table->index('post_id').
  2. The eager load selects only the columns you use. If the Blade template shows {{ $comment->author_name }} and {{ $comment->created_at }}, constrain the query to those columns plus id and the foreign key.
  3. The parent query is bounded. Pagination, limit(), or a where clause that returns a known number of rows. An unbounded get() on a growing table is a time bomb — the IN list grows with the data.
  4. You actually need the relationship data. If the template shows a count, use withCount(). If it shows a boolean ("has comments?"), use withExists() (Laravel 10+). If it shows nothing from the relationship, use without().
  5. The model does not have $with for this relationship. If it does, your explicit with() is redundant — and if you add constraints, they will be merged with the unconstrained $with in ways that merit attention. Remove the relationship from $with first.
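Checks 1 and 4 translate directly into a few lines. This is a hedged sketch — the migration class wrapper is omitted, and the appended attribute names follow Laravel's withCount/withExists conventions:

```php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Check 1: index the foreign key explicitly, inside a migration.
Schema::table('comments', function (Blueprint $table) {
    $table->index('post_id');   // PostgreSQL will not create this for you
});

// Check 4: aggregate instead of loading rows you will never read.
$posts = Post::withCount('comments')   // appends comments_count (integer)
    ->withExists('comments')           // appends comments_exists (boolean)
    ->paginate(25);                    // and the parent query stays bounded
```

Both aggregates run as subqueries inside the single parent SELECT, so no comment row is ever hydrated into PHP.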

Five checks. Under a minute each. The difference between a query that returns in 44ms and one that takes 1,024ms — and the difference between 2.4 MB in memory and 275 MB.

Where Gold Lapel fits in

The anti-patterns above share a structural trait: the application sends queries that are correct but suboptimal, and the developer may not know until production load reveals the cost. The index is missing because nobody thought to add one on post_id — it seemed obvious in retrospect. The columns are over-fetched because Eloquent defaults to SELECT *. The $with property fires on queries the original developer never anticipated.

Gold Lapel sits between your Laravel application and PostgreSQL, observing the actual query traffic. For Eloquent eager loading patterns specifically, three things happen automatically:

Gold Lapel automatic optimization
-- What Gold Lapel sees in your Eloquent query traffic:

-- Pattern detected: IN-list query on comments.post_id
-- Frequency: 340 calls/hour
-- Avg IN-list size: 847 elements
-- Avg planning time: 41ms (optimizer struggling with list size)
-- Avg execution time: 892ms
-- Columns accessed downstream: id, post_id, author_name, created_at
--   (body, metadata, ip_address, user_agent never read by application)

-- Gold Lapel response:
-- 1. Auto-creates index on comments(post_id)
--    covering (id, author_name, created_at) — the columns your
--    application actually reads after the ORM hydrates the objects
-- 2. Query plan shifts from Seq Scan to Index Only Scan
-- 3. Planning time: 41ms → 4ms (index makes IN-list evaluation cheaper)
-- 4. Execution time: 892ms → 12ms
-- 5. Wire transfer: 86 MB → 2.4 MB (covering index skips heap access)

-- No query changes. No migration. No deploy.

The covering index is the key insight. Gold Lapel does not just index the column used in the WHERE clause — it observes which columns the application actually reads after hydration and includes them in the index as covered columns. The result is an Index Only Scan: PostgreSQL answers the query entirely from the index without touching the heap. The 4 KB body TEXT column and the 2 KB metadata JSONB blob never leave disk.
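In plain PostgreSQL terms, a covering index of this kind uses the INCLUDE clause (PostgreSQL 11+). Laravel's schema builder has no INCLUDE helper, so a raw statement is one way to express it yourself — a sketch, with an assumed index name:

```php
<?php

use Illuminate\Support\Facades\DB;

// Sketch: a covering index so the eager load's IN-list query can run
// as an Index Only Scan. INCLUDE columns are stored in the index leaf
// pages but are not part of the search key (PostgreSQL 11+).
DB::statement(
    'CREATE INDEX comments_post_id_covering
     ON comments (post_id)
     INCLUDE (id, author_name, created_at)'
);

// The planner can now answer
//   SELECT id, post_id, author_name, created_at
//   FROM comments WHERE post_id IN (...)
// from the index alone, provided the visibility map is current,
// without touching the heap pages that hold body and metadata.
```

The trade-off is write amplification: every comment insert and update must also maintain the wider index, which is why the included columns should be limited to what the application demonstrably reads.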

For repeated join patterns — the kind that the $with property creates, firing the same relationship loads hundreds of times per hour — Gold Lapel can materialize the results as a view, eliminating the IN-list evaluation entirely.

I should be forthright about the boundary. Gold Lapel optimizes the queries that reach PostgreSQL — it cannot change the queries themselves. If your $with property loads five relationships and you only need one, Gold Lapel makes all five faster, but you are still paying for four unnecessary round trips. The architectural fix — removing $with, using scopes, constraining columns — remains your responsibility. The proxy handles what the ORM cannot express. It does not replace the ORM decisions that only you can make.

The principle, restated

Eager loading solved the N+1 problem. It did not solve the "load everything, everywhere, unconditionally" problem. It introduced it.

The N+1 problem is dramatic — 201 queries, obvious in any monitoring tool, immediately alarming. The eager loading anti-patterns are subtle — 2 queries instead of 201, but those 2 queries carry 86 MB of data your application discards, inflate planning time by 48ms per request, and fire on every route that touches the model including the ones nobody remembers exist.

Fix the anti-patterns you can see. Remove $with where it does not belong. Constrain your eager loads to the columns your templates use. Add indexes on foreign keys. Chunk any query that could return more than a few hundred rows. Use withCount() when you need a number, not a collection. And for the patterns that slip through — the ones buried in package code, admin panels, and queue jobs you wrote eighteen months ago — let the proxy handle what the ORM cannot express.

The database was not slow. It was being asked to carry the entire household's luggage when only a briefcase was required.


While the matter of polymorphic eager loading is fresh in your mind — the awkward guest mentioned above — I have written a dedicated guide to polymorphic associations and PostgreSQL performance that examines the indexing, type-column overhead, and alternative patterns that make the relationship less costly.