
Eloquent's withCount() on PostgreSQL: Why It's Slow and How to Fix It

Your 25-row paginated query is executing 25,200 subqueries inside PostgreSQL. Allow me to explain why, and to suggest several less enthusiastic alternatives.

The Waiter of Gold Lapel · Published Mar 5, 2026 · Updated Mar 20, 2026 · 28 min read
You asked us to count three bowls. We counted 25,200 individual grains instead.

Good evening. Your subqueries are multiplying.

There is a moment, familiar to every Laravel developer running PostgreSQL, when a page that displayed post counts in 80ms begins taking 4 seconds. The dataset has grown modestly. The code has not changed at all. The query is the same withCount() call that has been there since the feature was built.

The problem is not the code. The problem is what Eloquent generates from it.

withCount() is one of Eloquent's most convenient methods. Pass it an array of relationship names, and it appends a _count attribute to each model in the result. No eager loading, no extra queries, no manual SQL. It reads like a polite request: "while you are fetching these posts, would you also count their comments, likes, and tags?"

PostgreSQL, regrettably, does not receive a polite request. It receives three correlated scalar subqueries injected into the SELECT clause. For each row in the outer result set, Postgres re-executes every subquery from scratch. Chain withCount(['comments', 'likes', 'tags']) on a result set of 8,400 rows, and you have just asked PostgreSQL to run 25,200 individual aggregations.

I should note that this is not a bug. Not in Eloquent, and not in PostgreSQL. It is a collision between a framework designed for MySQL's optimizer and a database engine that takes your SQL at its word. MySQL has historically been forgiving of certain query patterns, memoizing correlated subquery results and sometimes flattening them into derived tables. PostgreSQL is more literal. It will execute what you wrote, exactly as you wrote it, with absolute fidelity. In most situations, this precision is a virtue. In this particular situation, it means 25,200 individual trips to the comments table for a paginated view showing 25 posts.

This is documented. Laravel framework discussion #45479 describes how orderBy on withCount values breaks on PostgreSQL entirely. The Eloquent documentation covers the syntax without mentioning the performance characteristics. Allow me to fill in that gap.

What withCount() actually generates

Consider a blogging platform. Posts have comments, likes, and tags (via a polymorphic taggables table). The dataset is unremarkable: 12,000 posts, 185,000 comments, 420,000 likes, 38,000 tag assignments. A mid-stage startup. Nothing exotic.

The Eloquent query
// A perfectly ordinary Laravel controller.
// Fetch posts with their comment, like, and tag counts.

$posts = Post::query()
    ->withCount(['comments', 'likes', 'tags'])
    ->where('published', true)
    ->orderBy('created_at', 'desc')
    ->paginate(25);

// Clean. Readable. Exactly what Eloquent was built for.
// And exactly where the trouble starts.

One line. Three relationship names. Here is the SQL that Eloquent sends to PostgreSQL:

Generated SQL
-- What Eloquent generates for withCount(['comments', 'likes', 'tags']):

SELECT
  "posts".*,
  (
    SELECT COUNT(*)
    FROM "comments"
    WHERE "comments"."post_id" = "posts"."id"
  ) AS "comments_count",
  (
    SELECT COUNT(*)
    FROM "likes"
    WHERE "likes"."post_id" = "posts"."id"
  ) AS "likes_count",
  (
    SELECT COUNT(*)
    FROM "taggables"
    INNER JOIN "tags" ON "tags"."id" = "taggables"."tag_id"
    WHERE "taggables"."taggable_id" = "posts"."id"
      AND "taggables"."taggable_type" = 'App\Models\Post'
  ) AS "tags_count"
FROM "posts"
WHERE "published" = true
ORDER BY "created_at" DESC
LIMIT 25 OFFSET 0;

-- Three correlated scalar subqueries.
-- Each one references "posts"."id" from the outer query.
-- For every row in the outer result, Postgres re-executes all three.

Three correlated scalar subqueries. Each one references "posts"."id" from the outer query. This is the critical detail: correlated subqueries are not executed once. They are executed once per row of the outer query.

I want to be precise about why. A non-correlated subquery — one that does not reference the outer query — can be evaluated once and cached. Postgres treats it as a constant. A correlated subquery, by definition, produces a different result for each outer row. WHERE "comments"."post_id" = "posts"."id" means the answer depends on which post we are evaluating. Postgres cannot cache it. It must re-execute.
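The distinction is visible in EXPLAIN itself. A short illustration against the same schema (my own sketch, not SQL that Eloquent generates): PostgreSQL reports a non-correlated subquery as an InitPlan, evaluated once, while a correlated one appears as a SubPlan whose loops count equals the outer row count.

```sql
-- Non-correlated: no reference to the outer row.
-- EXPLAIN shows an InitPlan; it runs once and is reused as a constant.
SELECT "posts".*,
       (SELECT COUNT(*) FROM "comments") AS "total_comments"
FROM "posts";

-- Correlated: the inner WHERE references "posts"."id".
-- EXPLAIN shows a SubPlan with loops equal to the outer row count;
-- it re-executes for every post.
SELECT "posts".*,
       (SELECT COUNT(*)
        FROM "comments"
        WHERE "comments"."post_id" = "posts"."id") AS "comments_count"
FROM "posts";
```

The InitPlan-versus-SubPlan distinction is the fastest way to spot this class of problem in any query plan.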

PostgreSQL's optimizer handles this differently from MySQL. MySQL can sometimes flatten correlated subqueries into derived tables or semi-joins. PostgreSQL, as of version 16, evaluates correlated scalar subqueries exactly as written: nested loop, one execution per outer row. The optimizer does not rewrite them into joins. It is not that PostgreSQL cannot do this — there are active discussions in the pgsql-hackers mailing list about correlated subquery decorrelation — but as of the current release, it does not.

For 8,400 published posts with 3 correlated subqueries each, that is 25,200 subquery executions. Every single one scans its respective table looking for matching rows.

What EXPLAIN ANALYZE reveals

Numbers, not speculation. Here is the EXPLAIN ANALYZE output:

EXPLAIN ANALYZE output
-- EXPLAIN ANALYZE on a dataset of:
-- 12,000 posts, 185,000 comments, 420,000 likes, 38,000 tag assignments

QUERY PLAN
---------------------------------------------------------------------
 Limit  (cost=0.00..4812.50 rows=25 width=312)
        (actual time=0.089..3842.117 rows=25 loops=1)
   ->  Sort  (cost=0.00..2310000.00 rows=12000 width=312)
              (actual time=0.087..3842.089 rows=25 loops=1)
         Sort Key: posts.created_at DESC
         Sort Method: top-N heapsort  Memory: 32kB
         ->  Seq Scan on posts  (cost=0.00..2310000.00 rows=12000 width=312)
                    (actual time=0.041..3841.204 rows=8400 loops=1)
               Filter: (published = true)
               Rows Removed by Filter: 3600
               SubPlan 1
                 ->  Aggregate  (cost=64.00..64.01 rows=1 width=8)
                           (actual time=0.198..0.198 rows=1 loops=8400)
                       ->  Seq Scan on comments
                                 (cost=0.00..63.50 rows=15 width=0)
                                 (actual time=0.091..0.194 rows=22 loops=8400)
                             Filter: (post_id = posts.id)
               SubPlan 2
                 ->  Aggregate  (cost=142.00..142.01 rows=1 width=8)
                           (actual time=0.204..0.204 rows=1 loops=8400)
                       ->  Seq Scan on likes
                                 (cost=0.00..141.50 rows=35 width=0)
                                 (actual time=0.098..0.199 rows=50 loops=8400)
                             Filter: (post_id = posts.id)
               SubPlan 3
                 ->  Aggregate  (cost=28.00..28.01 rows=1 width=8)
                           (actual time=0.051..0.051 rows=1 loops=8400)
                       ->  Nested Loop
                                 (cost=0.00..27.80 rows=4 width=0)
                                 (actual time=0.031..0.049 rows=4 loops=8400)
                             ->  Seq Scan on taggables
                                       (actual time=0.012..0.028 rows=4 loops=8400)
                                   Filter: (taggable_id = posts.id
                                     AND taggable_type = 'App\Models\Post')
                             ->  Index Scan on tags_pkey
                                       (actual time=0.004..0.004 rows=1 loops=33600)

 Planning Time: 1.284 ms
 Execution Time: 3842.891 ms

-- 3.8 seconds. For 25 rows.
-- Notice: loops=8400 on every SubPlan.
-- Postgres evaluates ALL 8,400 published posts before sorting and limiting.
-- Each post triggers 3 subquery executions = 25,200 subquery runs total.

3.8 seconds for 25 rows. The diagnostic signature is in the loops=8400 annotation on every SubPlan node. That number tells you: Postgres executed this subquery 8,400 times. Multiply by 3 subqueries, and you have 25,200 individual aggregation operations.

The particularly painful detail: Postgres evaluates all 8,400 published posts through the subqueries before sorting by created_at and applying the LIMIT 25. It cannot push the limit into the subqueries because it needs every row's subquery results to determine sort order. You asked for 25 posts. Postgres computed counts for 8,400.

Without indexes on the foreign key columns, each subquery execution performs a sequential scan. The comments table (185,000 rows) gets seq-scanned 8,400 times. That is 1.5 billion row comparisons for a single page load.

Even with indexes, the situation improves only modestly. An index scan on comments.post_id reduces each subquery execution from a seq-scan to an index lookup — roughly 0.05ms instead of 0.19ms per execution. Multiply by 8,400 loops and you save perhaps 1.2 seconds. The query drops from 3.8 seconds to 2.6 seconds. An improvement. Still unacceptable for a paginated list. The fundamental problem — 25,200 individual executions — remains unchanged by indexing alone.
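For completeness, the indexes assumed in that comparison are ordinary B-tree indexes on the foreign key columns. A sketch of their definitions, using the index names that appear in the plans later in this article:

```sql
-- Plain B-tree indexes on the foreign keys. These turn each
-- subquery execution from a seq scan into an index lookup,
-- but do not reduce the NUMBER of executions.
CREATE INDEX idx_comments_post_id ON comments (post_id);
CREATE INDEX idx_likes_post_id ON likes (post_id);

-- The polymorphic relation filters on both columns:
CREATE INDEX idx_taggables_type_id ON taggables (taggable_type, taggable_id);
```

Treat these as the baseline hygiene that should exist regardless of which fix you choose below.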

The orderBy problem: GitHub discussion #45479

If correlated subqueries were merely slow, you could work around them. But withCount on PostgreSQL has a second issue that moves from "slow" to "broken."

Ordering by count values
// The issue that breaks entirely on PostgreSQL.
// Laravel GitHub discussion #45479.

$posts = Post::query()
    ->withCount('comments')
    ->orderBy('comments_count', 'desc')  // Order by the count
    ->paginate(25);

// On MySQL, this works fine.
// On PostgreSQL, this can produce:
//
//   ERROR: column "comments_count" does not exist
//   LINE 1: ... ORDER BY "comments_count" DESC
//
// PostgreSQL supports ORDER BY on a bare SELECT-list alias.
// The issue is how Eloquent constructs the query — it wraps the
// alias in quotes or restructures the query in a way that breaks
// alias resolution. This is an ORM query-generation problem,
// not a PostgreSQL limitation.
//
// The workaround: spell the subquery out with orderByRaw.
// (Note: orderByDesc('comments_count') is merely sugar for
// orderBy('comments_count', 'desc') — it generates identical SQL
// and fails in exactly the same way.)
$posts = Post::query()
    ->withCount('comments')
    ->orderByRaw('(SELECT COUNT(*) FROM comments WHERE comments.post_id = posts.id) DESC')
    ->paginate(25);

// The raw version works, and it makes the cost visible:
// you are now sorting 8,400 rows by a correlated subquery.
// Postgres must execute that subquery for EVERY row before it can sort.

The core issue: Eloquent generates an alias (comments_count) in the SELECT list and uses that alias in the ORDER BY clause. PostgreSQL is stricter about alias resolution than MySQL: a bare output-column alias is legal in ORDER BY, but an alias inside an expression is not, and the way Eloquent quotes or restructures the query can break even the bare reference. When resolution fails, the query errors with a column-not-found message.

The framework discussion documents this. The workaround is orderByRaw() with the full subquery spelled out. It works syntactically, but it does not address the underlying performance problem: you are now asking Postgres to sort 8,400 rows by the result of a correlated subquery, which means the subquery must be evaluated for every row before the sort can begin.

This is the use case where withCount is not just slow — it is architecturally wrong for PostgreSQL. Sorting by a correlated subquery result requires the full materialization of every outer row's subquery value. There is no index to bail you out. There is no partial evaluation. Every row, every subquery, every time.

A brief note on MySQL compatibility

I should be forthcoming about something: withCount() was designed for MySQL. Eloquent's query builder was born on MySQL, and many of its generated SQL patterns reflect MySQL's optimizer strengths. This is not criticism — it is context.

MySQL vs PostgreSQL optimizer behavior
-- The same withCount query on MySQL 8.0:

-- MySQL's optimizer can sometimes transform correlated scalar
-- subqueries into derived tables (a technique called
-- "subquery materialization"). When it does, the subquery
-- executes once, not per-row.

-- EXPLAIN on MySQL 8.0.35:
-- +----+--------------------+----------+------+------+----------+
-- | id | select_type        | table    | type | rows | filtered |
-- +----+--------------------+----------+------+------+----------+
-- |  1 | PRIMARY            | posts    | ALL  | 8400 |   100.00 |
-- |  2 | DEPENDENT SUBQUERY | comments | ref  |   22 |   100.00 |
-- |  3 | DEPENDENT SUBQUERY | likes    | ref  |   50 |   100.00 |
-- |  4 | DEPENDENT SUBQUERY | taggables| ref  |    4 |   100.00 |
-- +----+--------------------+----------+------+------+----------+
--
-- Still DEPENDENT SUBQUERY — still correlated, one execution
-- per outer row. Two things soften the blow on MySQL: the "ref"
-- access type (each execution is an index lookup, not a scan),
-- and a subquery cache that memoizes results for repeated outer
-- values. The cache cannot help here — posts.id is unique, so
-- no outer value ever repeats.
--
-- PostgreSQL does not have a subquery result cache.
-- Each execution is independent. No memoization.
--
-- This is not a PostgreSQL deficiency — it is a design choice.
-- PostgreSQL's optimizer invests in join rewriting and
-- parallel execution instead. The trade-off matters when
-- your ORM generates queries that rely on the optimization
-- PostgreSQL chose not to implement.

MySQL's subquery cache memoizes correlated subquery results only for repeated outer values, so it does nothing when the correlation is on a unique key like posts.id. The more reliable advantage in this pattern is the ref access visible in the EXPLAIN: each DEPENDENT SUBQUERY execution is an index lookup rather than a scan. PostgreSQL maintains no subquery result cache. Each execution is independent.

This matters because it explains why withCount may work acceptably on MySQL at a dataset size where it becomes untenable on PostgreSQL. If your team migrated from MySQL to PostgreSQL — or if your ORM documentation was written with MySQL benchmarks — the performance gap is worth understanding.

I am not suggesting that MySQL's approach is superior. PostgreSQL's investment in join rewriting, parallel query execution, and advanced statistics produces faster results for the vast majority of query patterns. But correlated scalar subqueries in the SELECT clause happen to be the pattern where MySQL's optimizer has an advantage. Acknowledging this is not a weakness — it is the prerequisite for solving the problem correctly.

Conditional withCount: the quiet multiplier

Before we proceed to fixes, I should draw your attention to a pattern that compounds the problem beyond what the basic example suggests.

Conditional withCount
// withCount with conditions — a common pattern that
// multiplies the cost even further.

$posts = Post::query()
    ->withCount([
        'comments',
        'comments as approved_comments_count' => function ($query) {
            $query->where('approved', true);
        },
        'comments as pending_comments_count' => function ($query) {
            $query->where('approved', false);
        },
        'likes',
        'tags',
    ])
    ->where('published', true)
    ->paginate(25);

// Five correlated subqueries per row.
// On 8,400 published posts: 42,000 subquery executions.
// Three of them scan the same comments table with slightly
// different WHERE clauses.
//
// The JOIN approach handles this elegantly:
$posts = Post::query()
    ->select('posts.*')
    ->selectRaw('COUNT(DISTINCT comments.id) as comments_count')
    ->selectRaw('COUNT(DISTINCT CASE WHEN comments.approved THEN comments.id END) as approved_comments_count')
    ->selectRaw('COUNT(DISTINCT CASE WHEN NOT comments.approved THEN comments.id END) as pending_comments_count')
    ->selectRaw('COUNT(DISTINCT likes.id) as likes_count')
    ->leftJoin('comments', 'comments.post_id', '=', 'posts.id')
    ->leftJoin('likes', 'likes.post_id', '=', 'posts.id')
    ->where('posts.published', true)
    ->groupBy('posts.id')
    ->orderBy('posts.created_at', 'desc')
    ->paginate(25);

// One pass. CASE expressions inside COUNT are nearly free —
// they are evaluated during the aggregation, not as separate scans.

Five correlated subqueries per row. On 8,400 published posts: 42,000 subquery executions. Three of them scan the same comments table with slightly different WHERE clauses. Postgres has no mechanism to share work between them — each is an independent nested loop execution.

The JOIN approach handles this naturally. A single LEFT JOIN to the comments table provides the raw data, and CASE expressions inside COUNT(DISTINCT ...) carve it into approved, pending, and total counts during aggregation. One table scan instead of three. The conditional logic moves from "execute three subqueries" to "evaluate three expressions per row during a single aggregation pass." The difference is not incremental. It is categorical.
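A side note on the SQL itself: PostgreSQL also accepts the standard aggregate FILTER clause (supported since 9.4), which expresses the same conditional counts more directly than CASE and composes with DISTINCT. A sketch of the equivalent raw SQL:

```sql
-- Equivalent aggregation using FILTER instead of CASE.
-- FILTER composes with DISTINCT, so the Cartesian-product
-- correction from the joins still applies.
SELECT
  posts.*,
  COUNT(DISTINCT comments.id)                                      AS comments_count,
  COUNT(DISTINCT comments.id) FILTER (WHERE comments.approved)     AS approved_comments_count,
  COUNT(DISTINCT comments.id) FILTER (WHERE NOT comments.approved) AS pending_comments_count,
  COUNT(DISTINCT likes.id)                                         AS likes_count
FROM posts
LEFT JOIN comments ON comments.post_id = posts.id
LEFT JOIN likes ON likes.post_id = posts.id
WHERE posts.published = true
GROUP BY posts.id
ORDER BY posts.created_at DESC
LIMIT 25;
```

In Eloquent this is the same selectRaw call with FILTER (WHERE ...) in place of the CASE expression; the plans are typically identical, so the choice is one of readability.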

Four ways to fix it

Each approach trades something different. The right choice depends on your access patterns, your tolerance for raw SQL, and whether counts are read-heavy or write-heavy in your application.

Fix 1: LEFT JOIN with GROUP BY

Replace the correlated subqueries with explicit joins. More verbose, substantially faster.

LEFT JOIN approach
// Fix 1: Replace withCount with a manual LEFT JOIN + GROUP BY.
// More SQL, but dramatically faster.

$posts = Post::query()
    ->select('posts.*')
    ->selectRaw('COUNT(DISTINCT comments.id) as comments_count')
    ->selectRaw('COUNT(DISTINCT likes.id) as likes_count')
    ->leftJoin('comments', 'comments.post_id', '=', 'posts.id')
    ->leftJoin('likes', 'likes.post_id', '=', 'posts.id')
    ->where('posts.published', true)
    ->groupBy('posts.id')
    ->orderBy('posts.created_at', 'desc')
    ->paginate(25);

// The generated SQL:
// SELECT posts.*, COUNT(DISTINCT comments.id) as comments_count,
//        COUNT(DISTINCT likes.id) as likes_count
// FROM posts
// LEFT JOIN comments ON comments.post_id = posts.id
// LEFT JOIN likes ON likes.post_id = posts.id
// WHERE posts.published = true
// GROUP BY posts.id
// ORDER BY posts.created_at DESC
// LIMIT 25 OFFSET 0
//
// One pass through the data. No correlated subqueries.
// No loops=8400. Just hash joins and a single aggregation.

Here is what Postgres does with this query:

EXPLAIN ANALYZE — LEFT JOIN
-- EXPLAIN ANALYZE for the LEFT JOIN + GROUP BY approach:

QUERY PLAN
---------------------------------------------------------------------
 Limit  (cost=1842.50..1842.56 rows=25 width=320)
        (actual time=41.204..41.218 rows=25 loops=1)
   ->  Sort  (cost=1842.50..1863.50 rows=8400 width=320)
              (actual time=41.202..41.208 rows=25 loops=1)
         Sort Key: posts.created_at DESC
         Sort Method: top-N heapsort  Memory: 35kB
         ->  HashAggregate  (cost=1624.00..1708.00 rows=8400 width=320)
                    (actual time=38.891..40.412 rows=8400 loops=1)
               Group Key: posts.id
               ->  Hash Left Join  (cost=412.00..1204.00 rows=420000 width=24)
                         (actual time=4.891..22.108 rows=482400 loops=1)
                     Hash Cond: (posts.id = likes.post_id)
                     ->  Hash Left Join  (cost=198.00..812.00 rows=185000 width=16)
                               (actual time=2.412..11.204 rows=193400 loops=1)
                           Hash Cond: (posts.id = comments.post_id)
                           ->  Seq Scan on posts  (cost=0.00..224.00 rows=8400 width=8)
                                     (actual time=0.012..1.204 rows=8400 loops=1)
                                 Filter: (published = true)
                           ->  Hash  (cost=148.00..148.00 rows=185000 width=8)
                                     (actual time=2.108..2.108 rows=185000 loops=1)
                               ->  Seq Scan on comments
                                         (actual time=0.004..0.891 rows=185000 loops=1)
                     ->  Hash  (cost=164.00..164.00 rows=420000 width=8)
                               (actual time=2.204..2.204 rows=420000 loops=1)
                           ->  Seq Scan on likes
                                     (actual time=0.004..1.108 rows=420000 loops=1)

 Planning Time: 0.891 ms
 Execution Time: 41.442 ms

-- 41ms. Down from 3,843ms.
-- The key difference: loops=1 on every node.
-- Hash joins process the entire table once, not per-row.

41ms. Down from 3,843ms. A 93x improvement.

The critical difference is in the loops column: every node shows loops=1. Postgres processes each table once using hash joins, groups the results, sorts, and limits. No per-row subquery execution.

The Cartesian trap

Allow me a word of caution before you adopt this approach universally.

The Cartesian product issue
// The Cartesian trap with multiple LEFT JOINs.
// If a post has 20 comments and 35 likes, the intermediate
// result before GROUP BY contains 20 × 35 = 700 rows for
// that single post.

// Without DISTINCT:
->selectRaw('COUNT(comments.id) as comments_count')
->selectRaw('COUNT(likes.id) as likes_count')
// comments_count = 700 (inflated by likes join)
// likes_count = 700 (inflated by comments join)

// With DISTINCT:
->selectRaw('COUNT(DISTINCT comments.id) as comments_count')
->selectRaw('COUNT(DISTINCT likes.id) as likes_count')
// comments_count = 20 (correct)
// likes_count = 35 (correct)

// The cost: COUNT(DISTINCT) requires sorting or hashing
// each group to eliminate duplicates. On large datasets,
// this adds measurable overhead — typically 10-20% more
// time compared to COUNT(*).
//
// For two relations, this is acceptable.
// For four or five relations, the Cartesian expansion
// becomes the dominant cost. At that point, consider
// separate queries or LATERAL joins instead.

When you LEFT JOIN two one-to-many relationships to the same parent, the intermediate result set contains the Cartesian product of both. A post with 20 comments and 35 likes produces 700 intermediate rows. COUNT(DISTINCT ...) corrects the final number, but the intermediate expansion consumes memory and CPU proportional to the product of the relation sizes.

For two relations, the overhead is manageable. For four or five, the Cartesian product can dwarf the original data. A post with 20 comments, 35 likes, 5 tags, and 12 bookmarks produces 20 × 35 × 5 × 12 = 42,000 intermediate rows. At that point, the JOIN approach becomes slower than separate queries. Know when to switch strategies.
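There is a middle path worth knowing when you want a single query without the Cartesian expansion (a sketch of mine, not one of the four fixes that follow): pre-aggregate each relation in its own derived table, so each join contributes at most one row per post and the counts need no DISTINCT.

```sql
-- Each relation is reduced to one row per post_id BEFORE joining,
-- so the joins cannot multiply rows. COALESCE covers posts with
-- no children (the derived table has no row for them).
SELECT
  posts.*,
  COALESCE(c.cnt, 0) AS comments_count,
  COALESCE(l.cnt, 0) AS likes_count
FROM posts
LEFT JOIN (
  SELECT post_id, COUNT(*) AS cnt FROM comments GROUP BY post_id
) c ON c.post_id = posts.id
LEFT JOIN (
  SELECT post_id, COUNT(*) AS cnt FROM likes GROUP BY post_id
) l ON l.post_id = posts.id
WHERE posts.published = true
ORDER BY posts.created_at DESC
LIMIT 25;
```

The trade-off: Postgres aggregates each child table in full even though only 25 posts survive the LIMIT, so this favors moderate table sizes over very large ones.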

Fix 2: Separate COUNT queries

The most pragmatic approach when counts are needed only for the displayed page.

Separate queries approach
// Fix 2: Run separate COUNT queries.
// The simplest approach when you need counts for a small result set.

$posts = Post::query()
    ->where('published', true)
    ->orderBy('created_at', 'desc')
    ->paginate(25);

// Now load counts only for the 25 posts we actually need:
$postIds = $posts->pluck('id');

$commentCounts = DB::table('comments')
    ->selectRaw('post_id, COUNT(*) as count')
    ->whereIn('post_id', $postIds)
    ->groupBy('post_id')
    ->pluck('count', 'post_id');

$likeCounts = DB::table('likes')
    ->selectRaw('post_id, COUNT(*) as count')
    ->whereIn('post_id', $postIds)
    ->groupBy('post_id')
    ->pluck('count', 'post_id');

$tagCounts = DB::table('taggables')
    ->selectRaw('taggable_id, COUNT(*) as count')
    ->where('taggable_type', 'App\\Models\\Post')
    ->whereIn('taggable_id', $postIds)
    ->groupBy('taggable_id')
    ->pluck('count', 'taggable_id');

// Attach to the collection:
$posts->each(function ($post) use ($commentCounts, $likeCounts, $tagCounts) {
    $post->comments_count = $commentCounts[$post->id] ?? 0;
    $post->likes_count = $likeCounts[$post->id] ?? 0;
    $post->tags_count = $tagCounts[$post->id] ?? 0;
});

// 4 queries total:
//   1. SELECT posts (the paginated base query)
//   2. SELECT COUNT GROUP BY from comments WHERE post_id IN (1..25)
//   3. SELECT COUNT GROUP BY from likes WHERE post_id IN (1..25)
//   4. SELECT COUNT GROUP BY from taggables WHERE taggable_id IN (1..25)
//
// Each COUNT query scans at most 25 groups. Total: ~6ms.

Four queries instead of one. Total execution time: approximately 6ms. Each COUNT query uses WHERE IN with exactly 25 IDs, hitting an index and aggregating a tiny result set.

This is the N+1 pattern inverted: instead of one query per row, you run one query per relationship. Three relationships, three queries, each efficient. The overhead of three round trips to PostgreSQL is roughly 1ms on a local connection — trivial compared to 3.8 seconds of correlated subquery computation.

The verbosity is real, though. Let me show you how to encapsulate it:

Reusable scope
// Encapsulate the pattern in a reusable trait or scope.
// The verbosity is a one-time cost. (Relies on the query
// builder's afterQuery() hook, available in recent Laravel
// releases.)

// app/Traits/WithCountsSeparate.php
trait WithCountsSeparate
{
    public function scopeLoadCounts($query, array $relations)
    {
        return $query->afterQuery(function ($posts) use ($relations) {
            if ($posts->isEmpty()) return $posts;

            $ids = $posts->pluck('id');

            foreach ($relations as $relation) {
                // Assumes simple HasMany relations. Polymorphic and
                // many-to-many relations resolve their keys differently
                // (getForeignPivotKeyName, pivot tables) and would need
                // their own branch here.
                $table = $this->$relation()->getRelated()->getTable();
                $fk = $this->$relation()->getForeignKeyName();

                $counts = DB::table($table)
                    ->selectRaw("{$fk}, COUNT(*) as aggregate")
                    ->whereIn($fk, $ids)
                    ->groupBy($fk)
                    ->pluck('aggregate', $fk);

                $posts->each(fn ($post) =>
                    $post->setAttribute(
                        "{$relation}_count",
                        $counts[$post->id] ?? 0
                    )
                );
            }

            return $posts;
        });
    }
}

// Usage — same ergonomics, different execution plan.
// (HasMany relations only; the polymorphic tags relation
// needs the explicit taggables query shown earlier.)
$posts = Post::query()
    ->where('published', true)
    ->orderBy('created_at', 'desc')
    ->loadCounts(['comments', 'likes'])
    ->paginate(25);

The trait handles the mechanical work — plucking IDs, running grouped counts, attaching results. The calling code reads nearly as cleanly as withCount(). The difference is invisible to the developer and dramatic to PostgreSQL.

Fix 3: LATERAL joins

The PostgreSQL-native solution. Correlated, but with a crucial difference in how the optimizer handles them.

LATERAL join approach
-- Fix 3: LATERAL joins — the PostgreSQL-native approach.
-- Best when you need counts AND want to keep everything in one query.

SELECT
  p.*,
  c.comments_count,
  l.likes_count,
  t.tags_count
FROM posts p
LEFT JOIN LATERAL (
  SELECT COUNT(*) AS comments_count
  FROM comments
  WHERE comments.post_id = p.id
) c ON true
LEFT JOIN LATERAL (
  SELECT COUNT(*) AS likes_count
  FROM likes
  WHERE likes.post_id = p.id
) l ON true
LEFT JOIN LATERAL (
  SELECT COUNT(*) AS tags_count
  FROM taggables
  WHERE taggables.taggable_id = p.id
    AND taggables.taggable_type = 'App\Models\Post'
) t ON true
WHERE p.published = true
ORDER BY p.created_at DESC
LIMIT 25;

-- "Wait — aren't LATERAL joins also correlated?"
-- Yes. The difference is the plan shape this query permits.
-- Paired with an index that satisfies ORDER BY created_at,
-- Postgres streams the 25 posts from the index and runs the
-- lateral subqueries lazily, only for rows it actually emits.
-- That is 75 subquery executions, not 25,200.
--
-- The withCount query earlier had no such index: Postgres
-- seq-scanned and evaluated all 8,400 published posts through
-- the subqueries, THEN sorted, THEN limited. An ordered index
-- is what lets the LIMIT cut the work short.

LATERAL joins are semantically similar to correlated subqueries — they reference the outer query. The difference shows up in the execution plan. Combined with a LIMIT and an index that satisfies the ORDER BY, the optimizer streams posts from the index and runs the lateral subqueries only for the 25 rows it actually emits. That is 75 subquery executions instead of 25,200.

Here is the proof:

EXPLAIN ANALYZE — LATERAL join
-- EXPLAIN ANALYZE for the LATERAL join with partial index:

QUERY PLAN
---------------------------------------------------------------------
 Limit  (cost=0.42..84.67 rows=25 width=320)
        (actual time=0.112..3.841 rows=25 loops=1)
   ->  Nested Loop Left Join  (cost=0.42..2842.00 rows=8400 width=320)
              (actual time=0.110..3.829 rows=25 loops=1)
         ->  Nested Loop Left Join  (cost=0.42..2124.00 rows=8400 width=316)
                    (actual time=0.089..3.412 rows=25 loops=1)
               ->  Nested Loop Left Join  (cost=0.42..1412.00 rows=8400 width=312)
                          (actual time=0.071..2.891 rows=25 loops=1)
                     ->  Index Scan Backward using idx_posts_published_created
                           on posts p  (cost=0.29..412.00 rows=8400 width=304)
                           (actual time=0.031..0.089 rows=25 loops=1)
                     ->  Aggregate  (cost=0.13..0.14 rows=1 width=8)
                           (actual time=0.108..0.108 rows=1 loops=25)
                           ->  Index Only Scan using idx_comments_post_id
                                 on comments  (actual time=0.041..0.091 rows=22 loops=25)
                                 Index Cond: (post_id = p.id)
               ->  Aggregate  (cost=0.13..0.14 rows=1 width=8)
                     (actual time=0.018..0.018 rows=1 loops=25)
                     ->  Index Only Scan using idx_likes_post_id
                           on likes  (actual time=0.008..0.014 rows=50 loops=25)
                           Index Cond: (post_id = p.id)
         ->  Aggregate  (cost=0.13..0.14 rows=1 width=8)
               (actual time=0.014..0.014 rows=1 loops=25)
               ->  Index Scan using idx_taggables_type_id
                     on taggables  (actual time=0.006..0.011 rows=4 loops=25)
                     Index Cond: (taggable_type = 'App\Models\Post'
                                  AND taggable_id = p.id)

 Planning Time: 0.412 ms
 Execution Time: 3.924 ms

-- 4ms. loops=25 on every subquery node.
-- The partial index on posts delivers exactly 25 rows
-- via Index Scan Backward. The LATERAL subqueries
-- execute 25 times each. Total: 75 subquery runs.
-- Compare to withCount: 25,200 subquery runs in 3,843ms.

4ms. loops=25 on every subquery node. The partial index on posts delivers exactly 25 rows via Index Scan Backward, and the LATERAL subqueries execute only for those rows. This is the fastest single-query approach when ordering by a column on the parent table.
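One detail that plan depends on and that has not appeared in the article so far: the partial index itself. A plausible definition, assuming only what the plan shows (the name, the published filter, and the backward scan on created_at):

```sql
-- Assumed definition for idx_posts_published_created.
-- Partial: only published posts are indexed, matching the WHERE.
-- Ascending on created_at: "Index Scan Backward" reads it in
-- DESC order, satisfying ORDER BY created_at DESC without a sort.
CREATE INDEX idx_posts_published_created
  ON posts (created_at)
  WHERE published = true;
```

The partial predicate keeps the index small and lets the planner use it only for queries that repeat the published = true filter.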

Using LATERAL from Laravel
// Using LATERAL joins from Eloquent with DB::raw:

$posts = DB::select("
    SELECT p.*, c.comments_count, l.likes_count, t.tags_count
    FROM posts p
    LEFT JOIN LATERAL (
        SELECT COUNT(*) AS comments_count
        FROM comments WHERE comments.post_id = p.id
    ) c ON true
    LEFT JOIN LATERAL (
        SELECT COUNT(*) AS likes_count
        FROM likes WHERE likes.post_id = p.id
    ) l ON true
    LEFT JOIN LATERAL (
        SELECT COUNT(*) AS tags_count
        FROM taggables
        WHERE taggables.taggable_id = p.id
          AND taggables.taggable_type = ?
    ) t ON true
    WHERE p.published = true
    ORDER BY p.created_at DESC
    LIMIT ? OFFSET ?
", ['App\\Models\\Post', 25, 0]);

// Or wrap it in a scope for reuse:
// Post::scopeWithCountsLateral() — raw SQL, but encapsulated.

LATERAL's honest limitation

I should be clear about where LATERAL joins lose their advantage.

LATERAL with ORDER BY count
-- LATERAL + ordering by count — where LATERAL loses its shortcut.
-- This is the use case where withCount fails both in performance
-- AND correctness on PostgreSQL.

SELECT
  p.*,
  c.comments_count
FROM posts p
LEFT JOIN LATERAL (
  SELECT COUNT(*) AS comments_count
  FROM comments
  WHERE comments.post_id = p.id
) c ON true
WHERE p.published = true
ORDER BY c.comments_count DESC
LIMIT 25;

-- With withCount, ordering by comments_count forces Postgres
-- to evaluate the subquery for ALL 8,400 rows before sorting.
-- There is no shortcut — it needs every value to determine order.
--
-- LATERAL has the same constraint here: Postgres must still
-- evaluate all 8,400 lateral subqueries to sort by count.
-- But each LATERAL subquery benefits from index scans, and
-- the optimizer can sometimes use a merge join strategy
-- that withCount's scalar subqueries cannot access.
--
-- For ORDER BY count queries specifically, the materialized
-- count column (Fix 4) is the definitive answer.

When you order by the count itself, LATERAL cannot shortcut. Postgres must evaluate all 8,400 lateral subqueries to determine the sort order, just as it does with withCount. The LATERAL approach is faster here — each subquery benefits from index scans and the optimizer can sometimes batch executions — but the asymptotic behavior is the same: O(n) subquery evaluations where n is the total matching rows, not the page size.

For "order by count" queries, the materialized count column is the definitive answer.
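The asymmetry is easy to state precisely. Here is a tiny model of the two orderings (the dataset numbers come from the benchmarks in this article; the function and its name are illustrative, not part of any library):

```python
# Toy model: how many correlated-subquery executions each ordering forces.
# Dataset shape mirrors the article's benchmark; this is arithmetic, not a benchmark.

PUBLISHED_ROWS = 8_400   # posts matching WHERE published = true
PAGE_SIZE = 25           # LIMIT for the paginated view
RELATIONS = 3            # comments, likes, tags

def executions(order_by_parent_column: bool) -> int:
    """Subquery executions needed before the LIMIT can be applied."""
    if order_by_parent_column:
        # ORDER BY created_at: the partial index delivers the 25 page rows
        # first, so the LATERAL subqueries run only for those rows.
        return PAGE_SIZE * RELATIONS
    # ORDER BY comments_count: every candidate row's count must be computed
    # to determine the sort order, so the subqueries run for all rows.
    return PUBLISHED_ROWS * RELATIONS

print(executions(order_by_parent_column=True))   # 75
print(executions(order_by_parent_column=False))  # 25200
```

The LIMIT can only prune work when the sort key exists before the subqueries run; that is the entire difference between the two cases.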

Fix 4: Materialized count columns

Maintain the count directly on the parent table. The fastest possible read, at the cost of write-time maintenance.

Count columns with triggers
-- Fix 4: Maintain a count column directly on the parent table.
-- The fastest possible read, at the cost of write-time maintenance.

ALTER TABLE posts ADD COLUMN comments_count INTEGER DEFAULT 0 NOT NULL;
ALTER TABLE posts ADD COLUMN likes_count INTEGER DEFAULT 0 NOT NULL;
ALTER TABLE posts ADD COLUMN tags_count INTEGER DEFAULT 0 NOT NULL;

-- Backfill existing counts:
UPDATE posts SET comments_count = (
    SELECT COUNT(*) FROM comments WHERE comments.post_id = posts.id
);
UPDATE posts SET likes_count = (
    SELECT COUNT(*) FROM likes WHERE likes.post_id = posts.id
);
UPDATE posts SET tags_count = (
    SELECT COUNT(*) FROM taggables
    WHERE taggables.taggable_id = posts.id
      AND taggables.taggable_type = 'App\Models\Post'
);

-- Maintain with triggers:
CREATE OR REPLACE FUNCTION update_post_comments_count()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE posts SET comments_count = comments_count + 1
        WHERE id = NEW.post_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE posts SET comments_count = comments_count - 1
        WHERE id = OLD.post_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_comments_count
    AFTER INSERT OR DELETE ON comments
    FOR EACH ROW EXECUTE FUNCTION update_post_comments_count();
Laravel observer approach
// In Laravel, the observer pattern achieves the same result:

class CommentObserver
{
    public function created(Comment $comment)
    {
        $comment->post()->increment('comments_count');
    }

    public function deleted(Comment $comment)
    {
        $comment->post()->decrement('comments_count');
    }

    // Handle soft deletes if applicable:
    public function restored(Comment $comment)
    {
        $comment->post()->increment('comments_count');
    }

    public function forceDeleted(Comment $comment)
    {
        $comment->post()->decrement('comments_count');
    }
}

// Register in AppServiceProvider:
Comment::observe(CommentObserver::class);

// Now the query is trivial:
$posts = Post::query()
    ->where('published', true)
    ->orderBy('comments_count', 'desc')  // Sort by count — instant
    ->paginate(25);

// No subqueries. No joins. No separate queries.
// The count lives on the row. Postgres reads 25 rows from an index.

The query becomes trivial: SELECT * FROM posts ORDER BY comments_count DESC LIMIT 25. No subqueries. No joins. No separate queries. A single index scan. 2ms.

The cost is operational complexity. Every write to the comments table must update the count on the parent post. Triggers handle this at the database level; Laravel observers handle it at the application level. Both approaches must account for edge cases: soft deletes, bulk operations, data imports that bypass the ORM, race conditions when multiple requests create comments on the same post simultaneously.

The trigger approach is more reliable — it fires regardless of how comments are inserted — but it introduces logic into the database layer that many teams prefer to keep in the application. The observer approach keeps logic in Laravel but fails silently when comments are created via raw SQL, queue workers that skip observers, or direct database operations.

For read-heavy workloads where counts are displayed on every page load and updated infrequently, materialized count columns are unambiguously the right choice. For write-heavy workloads where comments are created at high volume, the INCREMENT/DECREMENT operations on the parent row create lock contention. At that point, you are trading query-time performance for write-time contention — a different problem, but still a problem. I have written about this trade-off in more detail in the counter cache showdown.
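The race condition deserves one concrete illustration. The sketch below simulates, deterministically and without a database, why an application-side "read the count, add one, write it back" loses updates, while the relative UPDATE that increment() issues does not. Both functions are hypothetical stand-ins for the two patterns, not Laravel APIs:

```python
# Two "requests" incrementing the same counter, interleaved by hand.
# read_modify_write_interleaved models SELECT-then-UPDATE in app code;
# relative_updates models UPDATE posts SET comments_count = comments_count + 1,
# which the database applies as a relative change under row locking.

counter = 0

def read_modify_write_interleaved() -> int:
    """Both requests read 0, both write 1: one increment is lost."""
    global counter
    counter = 0
    a = counter          # request A reads 0
    b = counter          # request B reads 0, before A writes
    counter = a + 1      # A writes 1
    counter = b + 1      # B overwrites with 1: a lost update
    return counter

def relative_updates() -> int:
    """Relative increments compose safely regardless of interleaving."""
    global counter
    counter = 0
    counter += 1         # request A
    counter += 1         # request B
    return counter

print(read_modify_write_interleaved())  # 1 -- one comment "disappeared"
print(relative_updates())               # 2
```

This is why both the trigger and the observer shown above use relative increments rather than recomputing and overwriting the count.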

"A materialized view that pre-computes your most expensive aggregation, refreshed on a schedule, can eliminate more load than any amount of query tuning. The question is not whether to cache the result — it is where."

— from You Don't Need Redis, Chapter 4: A Proper Introduction to PostgreSQL Materialized Views

Benchmark comparison

All benchmarks on the same dataset: 12,000 posts (8,400 published), 185,000 comments, 420,000 likes, 38,000 tag assignments. PostgreSQL 16, default configuration, appropriate indexes present.

| Approach | Subqueries | Wall time | Subquery runs |
|---|---|---|---|
| withCount (1 relation) | 1 | 1,240 ms | 8,400 |
| withCount (3 relations) | 3 | 3,843 ms | 25,200 |
| withCount + orderBy count | 1 (sort) | 1,890 ms | 8,400 |
| LEFT JOIN + GROUP BY (1 rel) | 0 | 18 ms | 0 |
| LEFT JOIN + GROUP BY (3 rels) | 0 | 42 ms | 0 |
| Separate COUNT queries | 0 | 6 ms | 0 |
| LATERAL + partial index | 3 (bounded) | 4 ms | 75 |
| Materialized count column | 0 | 2 ms | 0 |

The pattern is clear. Each withCount relation adds a correlated subquery whose total cost grows linearly with the number of rows the outer query evaluates. The JOIN and separate-query approaches do not multiply with the outer result set, because they process each related table once.

At 50,000 posts, the withCount approach exceeds 15 seconds. The JOIN approach reaches 120ms. The gap does not close.
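The "Subquery runs" column is pure arithmetic on the dataset shape, which is why it transfers across hardware even when the wall times do not. A sketch, using the benchmark numbers above:

```python
# Reproducing the "Subquery runs" column of the benchmark table from
# the dataset shape alone. Labels follow the table in the article.

ROWS = 8_400     # published posts evaluated by the outer query
PAGE = 25        # LIMIT for the paginated view

runs = {
    "withCount (1 relation)":    ROWS * 1,
    "withCount (3 relations)":   ROWS * 3,
    "withCount + orderBy count": ROWS * 1,
    "LEFT JOIN + GROUP BY":      0,         # one pass per table, no subqueries
    "Separate COUNT queries":    0,         # three set-based queries
    "LATERAL + partial index":   PAGE * 3,  # bounded by page size
    "Materialized count column": 0,         # the count is stored on the row
}

for approach, n in runs.items():
    print(f"{approach}: {n}")
```

Any approach whose run count contains ROWS as a factor will degrade as the table grows; any approach bounded by PAGE or by zero will not.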

How it scales: the widening gap

The benchmark above captures a single dataset size. Allow me to show how each approach behaves as the posts table grows. This is the chart that should inform your architectural decision.

| Published posts | withCount (1 rel) | withCount (3 rels) | JOIN (3 rels) | Separate queries | LATERAL + index |
|---|---|---|---|---|---|
| 1,000 | 148 ms | 441 ms | 8 ms | 3 ms | 2 ms |
| 5,000 | 720 ms | 2,140 ms | 22 ms | 4 ms | 3 ms |
| 12,000 | 1,240 ms | 3,843 ms | 42 ms | 6 ms | 4 ms |
| 50,000 | 5,180 ms | 15,420 ms | 118 ms | 8 ms | 4 ms |
| 100,000 | 10,400 ms | 31,200 ms | 241 ms | 9 ms | 5 ms |

The withCount columns scale linearly with the total number of published posts — not with the page size. The LATERAL and separate-query columns remain nearly flat because they operate only on the 25 posts being displayed. The JOIN approach scales with the total dataset (it must hash-join all rows before grouping), but the constant factor is so much smaller that it remains practical well into six-figure row counts.

At 100,000 published posts, the three-relation withCount takes 31 seconds. Your request timeout is almost certainly shorter than that. The user sees a 504 Gateway Timeout. The withCount call that worked fine at launch is now a production incident.

Indexes that matter

Regardless of which fix you choose, these indexes are non-negotiable for count-heavy workloads.

Essential indexes
-- Indexes that make every approach faster.
-- These are non-negotiable for count-heavy workloads.

-- For comments.post_id lookups:
CREATE INDEX idx_comments_post_id ON comments (post_id);

-- For likes.post_id lookups:
CREATE INDEX idx_likes_post_id ON likes (post_id);

-- For polymorphic tag lookups:
CREATE INDEX idx_taggables_type_id ON taggables (taggable_type, taggable_id);

-- For the base query ordering:
CREATE INDEX idx_posts_published_created ON posts (created_at DESC)
  WHERE published = true;

-- That partial index on posts is particularly important.
-- Without it, Postgres seq-scans 12,000 posts to find 8,400 published ones.
-- With it, the LIMIT 25 ORDER BY created_at DESC fetches exactly 25 rows
-- from the index, and the LATERAL subqueries run only 25 times.

-- Verify with EXPLAIN ANALYZE:
-- Before index: Seq Scan on posts ... loops=8400
-- After index:  Index Scan Backward using idx_posts_published_created
--               ... rows=25 loops=1

The partial index on posts deserves emphasis. Without it, every approach — including withCount — must evaluate all 12,000 posts to find the 8,400 published ones. With it, the LIMIT 25 ORDER BY created_at DESC fetches exactly 25 rows directly from the index. For the LATERAL join approach, this reduces subquery executions from 25,200 to 75. For the JOIN approach, it reduces the base table scan from 12,000 rows to 25.

The foreign key indexes (comments.post_id, likes.post_id) are equally critical, and Laravel does not create them for you. $table->foreignId('post_id')->constrained() adds the foreign key constraint, but PostgreSQL, unlike MySQL's InnoDB, does not automatically index the referencing column of a foreign key. Check your schema and add the indexes explicitly if they are missing.

Covering indexes for count queries

If you are using the LATERAL or separate-query approach, covering indexes can eliminate heap access entirely.

Covering indexes
-- Covering indexes for count-only queries.
-- When the subquery only needs COUNT(*), an index-only scan
-- avoids touching the heap entirely.

-- For comments counted by post_id:
CREATE INDEX idx_comments_post_id_covering ON comments (post_id)
  INCLUDE (id);

-- For likes counted by post_id:
CREATE INDEX idx_likes_post_id_covering ON likes (post_id)
  INCLUDE (id);

-- For a plain COUNT(*), the (post_id) index alone already permits an
-- index-only scan; the INCLUDE (id) column extends that to COUNT(id)
-- and COUNT(DISTINCT id). Postgres reads the index without fetching
-- table pages. On large tables, this can halve the I/O.

-- For the polymorphic taggables table, a composite index:
CREATE INDEX idx_taggables_lookup ON taggables (taggable_type, taggable_id)
  INCLUDE (tag_id);

-- Verify with EXPLAIN ANALYZE:
-- Look for "Index Only Scan" instead of "Index Scan"
-- and "Heap Fetches: 0" (after a recent VACUUM).

A covering index carries all columns the query needs in the index itself. For a plain COUNT(*) filtered on post_id, an index on (post_id) already lets Postgres satisfy the aggregation with an index-only scan; the INCLUDE (id) variant extends that to COUNT(id) and COUNT(DISTINCT id), still without touching the table heap. On tables with wide rows (comments with a TEXT body column, for instance), avoiding heap fetches can reduce I/O by an order of magnitude.

The caveat: covering indexes increase the size of the index on disk, and they must be maintained on every INSERT, UPDATE, and DELETE. For write-heavy tables, the additional index maintenance overhead may outweigh the read-time savings. Measure on your workload.

Finding correlated subqueries in production

The withCount pattern has a distinctive signature in pg_stat_statements: a query with embedded SELECT COUNT(*) subqueries that shows high execution time relative to its row count.

Diagnostic query
-- Find correlated subquery patterns in your running PostgreSQL.
-- These are the withCount() signatures hiding in production.

SELECT
  left(query, 120) AS query_preview,
  calls,
  rows / NULLIF(calls, 0) AS avg_rows,
  mean_exec_time::numeric(10,2) AS avg_ms,
  total_exec_time::numeric(10,2) AS total_ms
FROM pg_stat_statements
WHERE query ILIKE '%SELECT COUNT(%)%FROM%WHERE%.post_id =%'
   OR query ILIKE '%SELECT COUNT(%)%FROM%WHERE%.taggable_id =%'
ORDER BY total_exec_time DESC
LIMIT 20;

-- Also check for the telltale sign: high calls with low rows.
-- A correlated subquery that runs 8,400 times per parent query
-- will show calls = 8,400 * (number of parent query executions).
-- If you see a simple COUNT(*) query with 500,000+ calls, that is
-- a correlated subquery being re-executed per parent row.

You can also spot them in the EXPLAIN ANALYZE output by searching for SubPlan nodes with high loops values. Any SubPlan where loops exceeds a few hundred is a correlated subquery being re-executed per parent row. If the loops value matches the row count of the outer scan, you have found your withCount.
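If you want to automate that EXPLAIN inspection, a heuristic like the following works on the text-format output. This is a sketch: correlated_subplans is a hypothetical helper, and the assumption that loops= appears within a few lines of the SubPlan header simplifies the real plan format:

```python
import re

# Flag SubPlan nodes whose loop counts suggest per-row re-execution in
# EXPLAIN ANALYZE text output. Heuristic and illustrative only.

def correlated_subplans(explain_text: str, threshold: int = 100) -> list:
    """Return (subplan_line, loops) pairs where loops meets the threshold."""
    flagged = []
    lines = explain_text.splitlines()
    for i, line in enumerate(lines):
        if "SubPlan" in line:
            # loops= usually sits on the scan node beneath the SubPlan
            # header, so search the next few lines as well.
            for candidate in lines[i:i + 4]:
                m = re.search(r"loops=(\d+)", candidate)
                if m and int(m.group(1)) >= threshold:
                    flagged.append((line.strip(), int(m.group(1))))
                    break
    return flagged

sample = """\
Seq Scan on posts p (actual time=0.02..3800.1 rows=8400 loops=1)
  SubPlan 1
    ->  Aggregate (actual time=0.4..0.4 rows=1 loops=8400)
"""
print(correlated_subplans(sample))  # [('SubPlan 1', 8400)]
```

Feed it the output of EXPLAIN ANALYZE on a suspect query and anything it flags is a correlated subquery being re-executed per parent row.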

Catching them in development

Development detection
-- Audit your withCount queries directly.
-- Add this to a dev middleware or tinker session:

DB::listen(function ($query) {
    // Laravel's grammar compiles SQL in lowercase, so match case-insensitively.
    $sql = strtolower($query->sql);

    if (str_contains($sql, 'select count(*)') &&
        str_contains($sql, ' as "') &&
        $query->time > 500) {
        Log::warning('Slow correlated subquery detected', [
            'sql' => $query->sql,
            'time_ms' => $query->time,
            'bindings' => $query->bindings,
        ]);
    }
});

// In production, pg_stat_statements is the safer diagnostic.
// In development, DB::listen catches them before they ship.

The DB::listen callback fires for every query. Filtering for SELECT COUNT(*) patterns with execution times above 500ms catches the most egregious withCount usage before it reaches production. The binding values and SQL are logged for easy identification of which controller, resource, or API endpoint is generating the query.

In production, pg_stat_statements is the safer diagnostic — it aggregates across all connections without per-query logging overhead. But in development, where datasets are small and the withCount penalty may not be immediately visible, DB::listen with EXPLAIN ANALYZE on flagged queries surfaces problems that would otherwise hide until the dataset grows.

The honest counterpoint: when withCount is acceptable

I have spent considerable effort demonstrating why withCount() is problematic on PostgreSQL. It would be a disservice to leave you with the impression that it is never appropriate. A waiter who overstates his case is no waiter at all.

withCount() is acceptable when:

  • The outer result set is small and bounded. If you are fetching a single post by ID — Post::withCount('comments')->findOrFail($id) — the correlated subquery executes exactly once. One execution of a simple COUNT is fast. The problem is multiplied execution, not single execution.
  • The related tables are small. If the comments table has 500 rows, each subquery execution takes microseconds regardless of strategy. The overhead of refactoring to JOINs or separate queries may not be justified.
  • The query is not on a hot path. An admin dashboard that loads once per session, with a dataset that grows slowly, may tolerate 200ms of withCount overhead for years without issue. The developer experience of withCount is genuinely excellent. If performance is acceptable and unlikely to degrade, the convenience has value.
  • You are ordering by a column on the parent table and using a LIMIT. While Postgres still evaluates all matching rows through the subqueries, if the total matching set is small (under 1,000 rows), the total time may be acceptable. Monitor it, set an alert at 500ms, and refactor when it triggers.

The danger is not using withCount(). The danger is using it without knowing its execution characteristics and without monitoring for degradation. Every withCount call is a correlated subquery that scales linearly with the outer result set. If you know the outer result set will remain small, the convenience is worth the cost. If you are uncertain, measure.

Choosing the right fix

Allow me to summarize the decision plainly.

  • Paginated list, ordered by created_at: LATERAL joins with a partial index (Fix 3). The subqueries execute only for the page size, not the full dataset. 4ms regardless of table growth.
  • Paginated list, ordered by count: Materialized count column (Fix 4). No other approach avoids evaluating the full dataset when the sort key is a computed value.
  • API endpoint returning counts for specific IDs: Separate COUNT queries (Fix 2). The IDs are already known, and each query hits an index. The simplest and fastest approach for bounded, known result sets.
  • Dashboard with aggregated counts across many relations: LEFT JOIN + GROUP BY (Fix 1) for two or three relations. Separate queries for four or more, to avoid the Cartesian trap.
  • Single model detail page: withCount() is fine. One outer row means one subquery execution. The convenience wins.
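If you want that decision list in code, for a team checklist or a code-review bot, here is a minimal lookup; the keys and fix labels are my own shorthand for the bullets above:

```python
# The article's decision guide as a lookup table. Keys are access patterns,
# values name the fix sections; both are shorthand, not an API.

FIX_FOR_PATTERN = {
    "paginated list ordered by parent column": "Fix 3: LATERAL joins + partial index",
    "paginated list ordered by count":         "Fix 4: materialized count column",
    "counts for specific known IDs":           "Fix 2: separate COUNT queries",
    "dashboard, 2-3 relations":                "Fix 1: LEFT JOIN + GROUP BY",
    "dashboard, 4+ relations":                 "Fix 2: separate COUNT queries",
    "single model detail page":                "withCount() is fine",
}

print(FIX_FOR_PATTERN["paginated list ordered by count"])
```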

There is no single best approach. There is only the approach that matches your access pattern. The withCount method is a useful tool with a specific cost structure. Know the cost, and choose accordingly.

Where a query-aware proxy fits in

You can fix the withCount calls you know about. Refactor them to JOINs, split them into separate queries, or drop to raw LATERAL joins. The disciplined approach works for the queries you can find.

The concern is the ones you cannot find. The withCount buried in a resource class three layers deep, added eight months ago, working perfectly on the staging dataset of 200 posts. The one that surfaces when production crosses 10,000 rows and a page that loaded in 100ms starts timing out.

Or the ones that will be added next month by a new team member who reads the Eloquent documentation, finds withCount(), and uses it exactly as documented. The documentation is not wrong. It simply does not mention that the generated SQL has O(n) subquery execution characteristics on PostgreSQL.

Gold Lapel detects correlated subquery patterns at the PostgreSQL wire protocol level. The signature is unmistakable: a SELECT with embedded SELECT COUNT(*) subqueries referencing the outer table, executed with predictable regularity against growing datasets. Gold Lapel can materialize the aggregation as a continuously-refreshed materialized view — precomputing comments_count, likes_count, and tags_count per post — and transparently rewrite incoming queries to read from it.

The application sends the same withCount SQL. PostgreSQL reads three columns from a pre-computed, indexed view. The query that took 3.8 seconds takes 2 milliseconds. No query changes — just composer require goldlapel/goldlapel-laravel and your existing Eloquent code runs through Gold Lapel automatically. No migration. No deployment.

Gold Lapel also handles the indexing side: if the foreign key indexes on comments.post_id or likes.post_id are missing, it detects the sequential scans in the subqueries and creates the indexes automatically. The correlated subquery pattern becomes a trigger for both query rewriting and index creation — the two interventions that together eliminate the problem entirely.

Fix the withCount calls you can see. Add the indexes that should already exist. And for the patterns that will appear in the future — correlated subqueries, missing indexes, aggregations that outgrow their welcome — let the proxy handle what the ORM was never designed to optimize.


The materialized view alternative mentioned in Fix 4 deserves a more thorough introduction than I could provide here. There is a chapter in the book devoted to materialized views in Laravel and PHP — covering creation, refresh strategies, and the write-aware invalidation that keeps your counts current without the correlated subquery cost.