
Nova's count(*) Problem: Why Your Admin Panel Grinds to a Halt on Large PostgreSQL Tables

You have 300,000 orders. Nova counts every single one of them before showing you the first 25. Here is why, and how to stop it.

The Waiter of Gold Lapel · Published Mar 5, 2026 · Updated Mar 20, 2026 · 28 min read
Page load: 31 seconds. The admin has already left to make coffee.

Good evening. Your admin panel has a counting problem.

Not a small one. The kind where your Nova index page for orders, customers, or audit logs takes 30 seconds to render, and every developer on the team has quietly accepted that "the admin panel is just slow" without investigating why.

I have seen this pattern in enough Laravel applications to recognize it on arrival. A team builds with Nova. The application launches. The first ten thousand orders arrive without incident. Then fifty thousand. Then a hundred thousand. And then, one morning, someone mentions in standup that "loading the orders page is kind of painful" and everyone nods because they have been silently enduring it for weeks. The admin panel, which was once a pleasure to use, has become something the team avoids.

The culprit is a single query. Not a complex join. Not a missing index on a WHERE clause. Not an N+1 loop hidden in an eager load. A count(*).

Nova runs SELECT count(*) FROM your_table on every index page load to calculate pagination — total pages, "showing 1-25 of 312,847." On MySQL, this is comparatively fast because InnoDB can satisfy an unqualified count(*) by scanning its smallest secondary index, which is compact and usually resident in memory. On PostgreSQL, this query must visit every row in the table, one by one, and check whether each one is visible to the current transaction.

With 300,000 rows, that takes 31 seconds. With a million rows, you may as well go make that coffee. With five million, you may as well make a pot.

This is not a Nova bug. It is a collision between two reasonable design decisions — Nova's pagination model and PostgreSQL's concurrency model — that produces an unreasonable result. Here is everything you need to understand the problem, fix it today, and prevent it from returning.

What Nova actually sends to PostgreSQL

Consider a standard Nova resource. Nothing exotic — an orders table with a reference, a status, a customer relationship. The kind of resource you create on day one of any Nova project.

app/Nova/Order.php
<?php

namespace App\Nova;

use Laravel\Nova\Resource;
use Laravel\Nova\Fields\ID;
use Laravel\Nova\Fields\Text;
use Laravel\Nova\Fields\DateTime;
use Laravel\Nova\Fields\BelongsTo;
use Laravel\Nova\Http\Requests\NovaRequest;

class Order extends Resource
{
    public static $model = \App\Models\Order::class;

    public static $title = 'reference';

    public static $search = ['id', 'reference', 'status'];

    public function fields(NovaRequest $request)
    {
        return [
            ID::make()->sortable(),
            Text::make('Reference')->sortable(),
            Text::make('Status')->sortable(),
            BelongsTo::make('Customer'),
            DateTime::make('Created At')->sortable(),
        ];
    }
}

When you load the index page for this resource, Nova sends two queries to PostgreSQL. The first one is the problem.

Nova's index page queries — EXPLAIN output
-- What Nova sends to PostgreSQL when you load the Orders index page.
-- You see a table with 25 rows. PostgreSQL does considerably more work.

-- Step 1: Count every row in the table.
SELECT count(*) AS aggregate
FROM "orders";

-- On 300,000 rows, this is the EXPLAIN output:
--
-- Aggregate  (cost=8234.00..8234.01 rows=1 width=8)
--   ->  Seq Scan on orders  (cost=0.00..7484.00 rows=300000 width=0)
-- Planning Time: 0.08 ms
-- Execution Time: 31,247.43 ms
--
-- 31 seconds. Sequential scan. Every row visited.

-- Step 2: Fetch the actual page of data.
SELECT "orders".*
FROM "orders"
ORDER BY "orders"."id" DESC
LIMIT 25 OFFSET 0;

-- This takes 0.4ms with an index on id.
-- The page data is fast. The count is not.

The page data query — the 25 rows you actually see — takes 0.4 milliseconds. The count query — the number you see in the pagination footer — takes 31 seconds. Your page load is 99.999% pagination math.

I want to make this proportion concrete, because it is easy to read "31 seconds" and mentally file it as "slow" without appreciating the disproportion. You have two queries. One fetches 25 rows with an indexed sort — the actual data your user is looking at — and it finishes in less time than a single blink of a human eye. The other counts rows for a number displayed in 8-point font at the bottom of the page, and it takes long enough for your user to check their phone, lose interest, and navigate away.

This is not hypothetical. The Nova community has documented the problem extensively — issues #1141, #1259, and #2345 record the same complaint from dozens of teams. The threads run to hundreds of comments. The pattern is always the same: everything was fine at 10,000 rows, then the table grew, and the admin panel became unusable.

The GitHub issue history
# Relevant GitHub issues — this is not an obscure problem.
#
# laravel/nova-issues#1141 — "Resource index extremely slow on large tables"
#   Reported: 2018. Core complaint: count(*) on every page load.
#   Workaround: override countForPagination().
#
# laravel/nova-issues#1259 — "Performance issue with large datasets"
#   Reported: 2019. Same root cause. count(*) dominates response time.
#   Thread runs to 40+ comments. No official fix merged.
#
# laravel/nova-issues#2345 — "Slow loading on tables with millions of rows"
#   Reported: 2020. Users reporting 30-90 second page loads.
#   PostgreSQL users disproportionately affected.
#
# The pattern: every issue identifies the same bottleneck.
# Nova calls count(*) to calculate total pages for pagination.
# On MySQL, this is tolerable. On PostgreSQL, it is not.

Inside Nova's pagination machinery

To understand why this problem is structural rather than incidental, it helps to see what Nova does internally when an index page loads. The pagination flow is straightforward, and the bottleneck becomes obvious once you trace it.

Nova's pagination flow — simplified
<?php

// Nova's pagination flow — simplified from the source.
// This is what happens on every index page load.

// 1. Nova resolves the resource class
$resource = App\Nova\Order::class;

// 2. Nova builds the index query
$query = $resource::indexQuery($request, Order::query());

// 3. Nova gets the TOTAL count for pagination
// This is the expensive call:
$count = $resource::countForPagination($request, $query);

// 4. Nova calculates total pages
$perPage = 25;
$totalPages = ceil($count / $perPage);

// 5. Nova fetches the actual page data (fast)
$results = $query
    ->orderBy('id', 'desc')
    ->limit($perPage)
    ->offset(($page - 1) * $perPage)
    ->get();

// 6. Nova returns a LengthAwarePaginator
// This paginator REQUIRES the total count to render
// "Showing 1-25 of 312,847" and the page navigation.
return new LengthAwarePaginator(
    $results, $count, $perPage, $page
);

// The critical insight: steps 3 and 5 are separate queries.
// Step 5 takes 0.4ms. Step 3 takes 31 seconds.
// Nova MUST know the total count to build the paginator.
// There is no "skip the count" option in Nova's architecture.

The critical detail is step 3. Nova uses Laravel's LengthAwarePaginator, which requires an exact total count to render the pagination controls — page numbers, "showing X of Y," first/last page links. This is not a configurable behavior. The LengthAwarePaginator constructor takes the total count as a required parameter. Without it, the paginator cannot calculate total pages, and the pagination UI cannot render.

Laravel also provides a simple paginator — the Paginator class behind simplePaginate() — which does not require a total count. It fetches N+1 rows, checks whether a next page exists, and provides only "previous / next" navigation.

SimplePaginator — no count query needed
<?php

// An alternative approach: use SimplePaginator instead of LengthAwarePaginator.
// SimplePaginator does NOT require a total count.
// It fetches N+1 rows and checks whether a next page exists.

// In a custom Nova tool or controller:
$orders = Order::query()
    ->orderBy('id', 'desc')
    ->simplePaginate(25);

// SimplePaginator generates:
// SELECT * FROM orders ORDER BY id DESC LIMIT 26 OFFSET 0;
//
// If 26 rows come back: there is a next page. Show 25, set hasMore = true.
// If fewer than 26 come back: this is the last page.
//
// Total time: 0.4ms. No count query at all.
//
// The trade-off: no "Page 3 of 12,547" display.
// You get "Previous / Next" navigation only.
// For an admin panel, this is often perfectly acceptable.

// Unfortunately, Nova's index page uses LengthAwarePaginator internally.
// You cannot swap to SimplePaginator without modifying Nova's core
// pagination component. The countForPagination override is the
// sanctioned extension point.

The SimplePaginator approach eliminates the count query entirely. The trade-off is that your admin panel shows "Previous / Next" instead of "Page 3 of 12,547." For many admin workflows, this is perfectly acceptable — the user is browsing or searching, not navigating to a specific page number. But Nova's index component is built around LengthAwarePaginator, and swapping the paginator type requires modifying Nova's internals rather than extending them.

The sanctioned extension point is countForPagination(). This is the method Nova calls in step 3, and it is designed to be overridden. The default implementation runs SELECT count(*) against the full table. Your override can return any integer it pleases — an estimate, a cached value, a hardcoded number. Nova will use it without complaint.
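
To see how permissive that contract is, here is the smallest possible override — purely illustrative, never something to ship; the hardcoded figure is just the number from the example pagination footer earlier in this article:

```php
// Illustration only: Nova renders pagination around whatever integer
// this returns. The hook performs no validation of the value.
public static function countForPagination(NovaRequest $request, $query)
{
    return 312847; // hardcoded — purely to demonstrate the contract
}
```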

Why count(*) is O(n) on PostgreSQL

This is the part most Laravel developers have never had reason to learn, because most Laravel applications run on MySQL where this problem does not manifest in the same way.

PostgreSQL uses Multi-Version Concurrency Control (MVCC). When a row is updated or deleted, PostgreSQL does not remove the old version immediately. It creates a new version and marks the old one as dead. This is how PostgreSQL achieves non-blocking reads — your SELECT sees a consistent snapshot of the data even while other transactions are writing.

The consequence: there is no single "row count" for a table. Different transactions may see different numbers of rows depending on when they started and which rows have been committed. To answer count(*), PostgreSQL must walk every row in the table, check its visibility against the current transaction's snapshot, and count only the visible ones.

This is not a performance oversight. It is a direct consequence of the same MVCC design that gives PostgreSQL its excellent concurrency characteristics. The trade-off is real: you get non-blocking reads and snapshot isolation at the cost of an O(n) count(*).

MVCC and the counting cost
-- Why count(*) is slow on PostgreSQL but fast on MySQL.
--
-- MySQL (InnoDB): satisfies count(*) by scanning the smallest
-- available secondary index — compact, usually cached. Comparatively fast.
--
-- PostgreSQL: Uses MVCC (Multi-Version Concurrency Control).
-- Every row can have multiple versions visible to different transactions.
-- To count rows, PostgreSQL must visit each row and check whether
-- it is visible to the current transaction's snapshot.
--
-- This is not a bug. It is a consequence of PostgreSQL's concurrency model.
-- The same mechanism that lets you run long-running reports without
-- blocking writes also means count(*) cannot take shortcuts.

-- Demonstration with pg_class (the optimistic estimate):
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'orders';
-- Returns: 300247  (updated by ANALYZE, not by every INSERT)

-- Demonstration with pg_stat_all_tables (vacuum-maintained estimate):
SELECT n_live_tup
FROM pg_stat_all_tables
WHERE relname = 'orders';
-- Returns: 299814  (updated by autovacuum, slightly stale)

-- Demonstration with exact count:
SELECT count(*) FROM orders;
-- Returns: 300000  (correct, but 31 seconds of CPU)

MySQL's InnoDB engine also uses MVCC, but it stores old row versions in undo logs rather than alongside live rows, and it satisfies count(*) by scanning the smallest available secondary index. The architectural difference means the same query — SELECT count(*) FROM orders — has fundamentally different performance characteristics across the two databases.

For most application queries, this does not matter. A SELECT count(*) FROM orders WHERE status = 'pending' with a proper index on status scans only the matching rows — fast on any database. The problem is the unqualified count(*) with no WHERE clause, which must scan the entire table. And that is precisely what Nova sends for pagination.
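
To make the contrast concrete, compare the plan shape of a filtered count — the index name below is an assumption, and the plan is illustrative rather than measured output from the article's dataset:

```sql
-- Assumed index for the illustration:
CREATE INDEX IF NOT EXISTS idx_orders_status ON orders (status);

EXPLAIN ANALYZE
SELECT count(*) FROM orders WHERE status = 'pending';

-- Illustrative plan shape (your costs and timings will differ):
--
-- Aggregate
--   ->  Index Only Scan using idx_orders_status on orders
--         Index Cond: (status = 'pending'::text)
--
-- Only tuples matching the condition are visited. The visibility
-- checks still happen, but over a subset instead of all 300,000 rows.
```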

The MVCC visibility check, in detail

If you will permit me a brief detour into PostgreSQL internals, the mechanism behind the slow count deserves more than a hand-wave. Understanding what PostgreSQL actually does when it counts rows will clarify why none of the obvious shortcuts work.

MVCC visibility — per-tuple overhead
-- The MVCC visibility check in detail.
--
-- Every row in PostgreSQL has hidden system columns:
--   xmin  — the transaction ID that created this row version
--   xmax  — the transaction ID that deleted/updated this row version (0 if live)
--   ctid  — the physical location of this row version on disk
--
-- When you run SELECT count(*) FROM orders, PostgreSQL does this
-- for every single row:

-- For each row (tuple) in the table:
--   1. Read xmin. Was this transaction committed? Is it visible to me?
--   2. Read xmax. Has this row been deleted or updated by a committed
--      transaction that I can see?
--   3. Check the CLOG (commit log) to determine transaction status
--   4. Apply snapshot visibility rules
--   5. If visible: increment counter. If not: skip.
--
-- This is called a "visibility check" and it happens per-tuple.
-- There is no shortcut. There is no index that pre-filters dead tuples.
-- Even a covering index scan must perform these checks.

-- You can see the tuple overhead with pageinspect:
CREATE EXTENSION IF NOT EXISTS pageinspect;

SELECT lp, t_xmin, t_xmax, t_ctid,
       CASE WHEN t_xmax = 0 THEN 'live' ELSE 'dead/updated' END AS status
FROM heap_page_items(get_raw_page('orders', 0))
LIMIT 5;

-- lp | t_xmin | t_xmax | t_ctid | status
-- ---+--------+--------+--------+-------
--  1 |   1042 |      0 | (0,1)  | live
--  2 |   1042 |      0 | (0,2)  | live
--  3 |   1042 |   1098 | (0,3)  | dead/updated
--  4 |   1042 |      0 | (0,4)  | live
--  5 |   1099 |      0 | (0,5)  | live

Every tuple in a PostgreSQL heap has hidden system columns — xmin (the transaction that created it), xmax (the transaction that deleted or updated it), and ctid (its physical location). When PostgreSQL processes a count(*), it reads each tuple, inspects these columns, consults the commit log (CLOG) to determine whether the relevant transactions committed, and applies the current snapshot's visibility rules.

This per-tuple visibility check is the fundamental cost. It cannot be avoided by indexes. It cannot be parallelized away — though PostgreSQL does support parallel sequential scans for count, the aggregate cost of 300,000 visibility checks remains substantial even across multiple workers. The work is genuinely O(n) in the number of live tuples, and no amount of indexing changes that.

I'm afraid the index does not help as much as you'd hope

A natural question: "Can PostgreSQL use an index-only scan for count(*)?" The answer is yes, technically, but the improvement is less dramatic than you might expect.

Index-only scan — still needs visibility checks
-- "Can't PostgreSQL use an index-only scan for count(*)?"
-- It can — but it still must check visibility.

-- Create the smallest possible index:
CREATE INDEX idx_orders_id ON orders (id);

-- Try to count with an index-only scan:
EXPLAIN ANALYZE SELECT count(*) FROM orders;

-- Aggregate  (cost=8234.00..8234.01 rows=1 width=8)
--   ->  Index Only Scan using idx_orders_id on orders
--         (cost=0.42..7484.00 rows=300000 width=0)
--         Heap Fetches: 4201
-- Planning Time: 0.12 ms
-- Execution Time: 28,412.18 ms
--
-- Note: "Index Only Scan" — PostgreSQL reads the index, not the table.
-- But: "Heap Fetches: 4201" — for rows on pages that haven't been
-- vacuumed recently, PostgreSQL must check the heap (table) to verify
-- visibility. This is the "visibility map" optimization.
--
-- Even the index-only scan took 28 seconds.
-- The bottleneck is not I/O — it is the 300,000 visibility checks.

-- After a full VACUUM (which updates the visibility map):
VACUUM orders;
EXPLAIN ANALYZE SELECT count(*) FROM orders;

-- Aggregate  (cost=8234.00..8234.01 rows=1 width=8)
--   ->  Index Only Scan using idx_orders_id on orders
--         (cost=0.42..7484.00 rows=300000 width=0)
--         Heap Fetches: 0
-- Planning Time: 0.09 ms
-- Execution Time: 142.18 ms
--
-- After VACUUM: 142ms. The visibility map allows all pages to be
-- skipped for heap fetches. But this only works when the visibility
-- map is fully up to date — i.e., immediately after VACUUM.
-- On a table with active writes, heap fetches reappear quickly.

An index-only scan avoids reading the main table heap — it reads only the index, which is smaller and often cached. But PostgreSQL still must verify visibility for each tuple. The visibility map — a per-table bitmap that tracks which pages contain only all-visible tuples — allows PostgreSQL to skip the heap fetch for pages marked all-visible. After a VACUUM that marks every page visible, the index-only count drops from 31 seconds to about 140ms on 300,000 rows.

That is a meaningful improvement. But 140ms is still 3,500 times slower than the 0.04ms estimate from pg_stat_all_tables. And the 140ms figure is the best case — immediately after a VACUUM with no subsequent writes. On a table with active write traffic, pages get dirtied, heap fetches return, and the count creeps back toward the full scan cost.

This is worth understanding because "just add an index" is the reflexive response to any PostgreSQL performance problem, and for count(*) it is the wrong reflex. The bottleneck is not I/O. It is the 300,000 visibility checks, and those happen regardless of whether the data comes from the heap or an index.

How this scales — and when it starts to hurt

The relationship between table size and count time is not quite linear in practice — caching effects, parallel scans, and visibility map coverage all influence the curve. But the general shape is clear, and the numbers are instructive.

Table rows | Exact count(*) | n_live_tup estimate | Improvement
10,000     | 12 ms          | 0.04 ms             | 300x
50,000     | 58 ms          | 0.04 ms             | 1,450x
100,000    | 124 ms         | 0.04 ms             | 3,100x
300,000    | 31,247 ms      | 0.04 ms             | 781,175x
1,000,000  | 104,200 ms     | 0.04 ms             | 2,605,000x
5,000,000  | ~8 min         | 0.04 ms             | ~12,000,000x

At 10,000 rows, the exact count takes 12ms — fast enough that nobody notices. This is why the problem does not manifest until the table grows. At 50,000 rows, the count takes 58ms — perceptible if you are paying attention, but still within the range that teams dismiss as "fine." At 100,000 rows, the count passes 100ms and starts to feel sluggish. By 300,000 rows, you are at 31 seconds and your admin panel is effectively broken.

The estimate column does not change. It is 0.04ms at every size. This is because the estimate is a catalog lookup — a single row from a system table, served via an index scan. The table could have five rows or five billion. The estimate cost is the same.

I should note the five-million-row figure. Eight minutes for a count(*). I have seen this in production — an audit log table, rarely vacuumed, growing without bound. The Nova index page for audit events was simply a loading spinner. The team had disabled the audit log page entirely and was running psql for audit queries. They did not know why the page was slow. They assumed it was "just Nova."

"Materialized views pre-compute expensive query results and serve them at index-scan speed. A dashboard that aggregates millions of rows becomes a table that returns pre-computed results in milliseconds."

— from You Don't Need Redis, Chapter 19: The Case for Simplicity

The workaround: estimated counts from pg_stat_all_tables

PostgreSQL tracks approximate row counts in two system catalogs. The pg_class.reltuples column is updated when ANALYZE runs. The pg_stat_all_tables.n_live_tup column is maintained by the statistics collector and updated by autovacuum. Both provide a count that is close enough for pagination without the 31-second scan.

pg_class.reltuples vs pg_stat_all_tables.n_live_tup
-- pg_class.reltuples vs pg_stat_all_tables.n_live_tup
-- Both are estimates. They differ in subtle ways.

-- pg_class.reltuples:
SELECT reltuples::bigint AS estimate FROM pg_class WHERE relname = 'orders';
-- Updated by: ANALYZE (manual or auto)
-- What it represents: the planner's row estimate for query planning
-- Accuracy after bulk load: can be very stale until ANALYZE runs
-- After TRUNCATE: drops to 0 immediately
-- After 50K inserts without ANALYZE: still shows the old value

-- pg_stat_all_tables.n_live_tup:
SELECT n_live_tup FROM pg_stat_all_tables WHERE relname = 'orders';
-- Updated by: the statistics collector, shortly after operations are reported
-- What it represents: tracked live tuple count via insert/update/delete counters
-- Accuracy after bulk load: updated incrementally as operations occur
-- After TRUNCATE: drops to 0 after next stats collector update
-- After 50K inserts without ANALYZE: reflects the new count (approximately)

-- For Nova pagination, n_live_tup is the better choice.
-- It tracks row-level changes more responsively than reltuples,
-- which waits for a full ANALYZE pass.

-- For query planning and rough estimates, reltuples is fine.
-- Both are free to read — they're just catalog lookups.

For Nova pagination, n_live_tup is the better choice. It tracks row-level changes more responsively than reltuples, which waits for a full ANALYZE pass. After 10,000 inserts without an ANALYZE, n_live_tup will reflect the new rows (approximately), while reltuples may still show the old count.

The fix is to override Nova's countForPagination method on any resource backed by a large table.

app/Nova/Order.php — with estimated count
<?php

namespace App\Nova;

use Laravel\Nova\Resource;
use Laravel\Nova\Http\Requests\NovaRequest;
use Illuminate\Support\Facades\DB;

class Order extends Resource
{
    public static $model = \App\Models\Order::class;

    /**
     * Replace the exact count(*) with pg_stat_all_tables estimate.
     * Drops response time from 30-90 seconds to < 50 milliseconds.
     */
    public static function countForPagination(
        NovaRequest $request,
        $query
    ) {
        // For filtered/searched queries, fall back to exact count.
        // The WHERE clause limits the scan to a manageable subset.
        if ($request->filled('search') || $request->filled('filters')) {
            return parent::countForPagination($request, $query);
        }

        // For unfiltered index loads, use the PostgreSQL estimate.
        $table = (new static::$model)->getTable();

        $estimate = DB::selectOne(
            "SELECT n_live_tup FROM pg_stat_all_tables WHERE relname = ?",
            [$table]
        );

        return (int) ($estimate?->n_live_tup ?? 0);
    }
}

The key design decision: only use the estimate for unfiltered index loads. When the user is searching or filtering, the WHERE clause limits the scan to a manageable subset, and the exact count is both fast and useful. The pagination footer on a search result that says "3 of ~300,000" would be confusing. "3 of 3" is correct.
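
A refinement worth considering, sketched here on top of the same countForPagination hook: when the estimate itself is small, the exact count is cheap anyway (the scaling table above puts the comfortable range below roughly 50,000 rows), so the estimate can serve as a gate rather than a replacement. The threshold is an illustrative choice.

```php
public static function countForPagination(NovaRequest $request, $query)
{
    if ($request->filled('search') || $request->filled('filters')) {
        return parent::countForPagination($request, $query);
    }

    $estimate = (int) (DB::selectOne(
        "SELECT n_live_tup FROM pg_stat_all_tables WHERE relname = ?",
        [(new static::$model)->getTable()]
    )?->n_live_tup ?? 0);

    // Below ~50K rows the exact count is fast enough (see the scaling
    // table above), so only substitute the estimate on large tables.
    return $estimate < 50000
        ? parent::countForPagination($request, $query)
        : $estimate;
}
```

This also handles missing statistics for free: a table with no entry in pg_stat_all_tables yields an estimate of zero and falls through to the exact count.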

The EXPLAIN numbers

The improvement is not incremental. It is categorical.

EXPLAIN ANALYZE — exact vs estimated
-- EXPLAIN ANALYZE: exact count vs estimated count.

-- Exact count (what Nova does by default):
EXPLAIN ANALYZE SELECT count(*) FROM orders;

-- Aggregate  (cost=8234.00..8234.01 rows=1 width=8)
--   ->  Seq Scan on orders  (cost=0.00..7484.00 rows=300000 width=0)
-- Planning Time: 0.08 ms
-- Execution Time: 31,247.43 ms


-- Estimated count (the workaround):
EXPLAIN ANALYZE
SELECT n_live_tup
FROM pg_stat_all_tables
WHERE relname = 'orders';

-- Index Scan using pg_stat_all_tables_relname_idx
--   on pg_stat_all_tables  (cost=0.27..8.29 rows=1 width=8)
--   Index Cond: (relname = 'orders'::name)
-- Planning Time: 0.09 ms
-- Execution Time: 0.04 ms


-- That is a 780,000x improvement.
-- 31 seconds vs 0.04 milliseconds.
-- Same page load. Same pagination controls. Different century.

Method                        | Speed     | Accuracy            | Staleness                | Best for
SELECT count(*)               | 31,247 ms | Exact               | None                     | Small tables (< 50K rows)
pg_stat_all_tables.n_live_tup | 0.04 ms   | Estimated           | Depends on autovacuum    | Large tables, pagination counts
pg_class.reltuples            | 0.03 ms   | Estimated           | Updated by ANALYZE only  | Rough order-of-magnitude
count(*) with WHERE clause    | 2-50 ms   | Exact (filtered)    | None                     | Filtered/search results
Cached count (Gold Lapel)     | < 1 ms    | Exact (when cached) | Invalidated on write     | All tables, transparent

The trade-off is accuracy. n_live_tup is an estimate maintained by PostgreSQL's statistics collector: it is updated incrementally as inserts, updates, and deletes are reported, and resynchronized when autovacuum runs VACUUM or ANALYZE on the table. Between those passes, the estimate drifts as the counters lag behind committed work.

How much drift? On a table with 300,000 rows and moderate write traffic, the estimate is typically within 0.1% of the real count. Your pagination footer might say "showing 1-25 of 299,814" instead of "showing 1-25 of 300,000." For an admin panel, this is imperceptible. The user does not notice. The user does notice 31-second page loads.
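
The comparison table above also lists a cached exact count. A minimal sketch of that approach, assuming Laravel's cache facade — the key name and TTL are illustrative choices, and nothing here is Nova API beyond the countForPagination hook:

```php
use Illuminate\Support\Facades\Cache;

public static function countForPagination(NovaRequest $request, $query)
{
    if ($request->filled('search') || $request->filled('filters')) {
        return parent::countForPagination($request, $query);
    }

    // Serve a cached exact count; recount at most every ten minutes.
    // Key name and TTL are illustrative, not prescriptive.
    return (int) Cache::remember(
        'nova-count:'.static::$model,
        600,
        fn () => (new static::$model)->newQuery()->count()
    );
}
```

The first request after the cache expires still pays the full count, so on very large tables it is worth refreshing the key from a scheduled job rather than in the request path.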

Making it reusable: a trait for all your large-table resources

If you have one large table, you have several. Copying the countForPagination override into every resource is the kind of thing that works until someone forgets. A trait is cleaner.

app/Nova/Concerns/UsesEstimatedCounts.php
<?php

namespace App\Nova\Concerns;

use Illuminate\Support\Facades\DB;
use Laravel\Nova\Http\Requests\NovaRequest;

trait UsesEstimatedCounts
{
    /**
     * Use PostgreSQL estimated row count for unfiltered index pages.
     * Falls back to exact count for searches and filtered views.
     */
    public static function countForPagination(
        NovaRequest $request,
        $query
    ) {
        if ($request->filled('search') || $request->filled('filters')) {
            return parent::countForPagination($request, $query);
        }

        $table = (new static::$model)->getTable();

        $estimate = DB::selectOne(
            "SELECT n_live_tup FROM pg_stat_all_tables WHERE relname = ?",
            [$table]
        );

        return (int) ($estimate?->n_live_tup ?? 0);
    }
}

// Usage in any Nova resource:
//
// use App\Nova\Concerns\UsesEstimatedCounts;
//
// class Order extends Resource
// {
//     use UsesEstimatedCounts;
//     // ... fields, etc.
// }
//
// class Customer extends Resource
// {
//     use UsesEstimatedCounts;
//     // ... fields, etc.
// }

Two lines per resource. The trait handles the fallback logic for filtered queries, the table name resolution, and the type casting. If your team adds a new Nova resource for a table that grows past 50,000 rows, they add the trait. If they forget, the page loads will remind them.
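
If "if they forget" makes you nervous, the convention can be enforced with a small test — a sketch using PHPUnit and Laravel's class_uses_recursive helper; the list of large resources is an assumption you would maintain by hand:

```php
use PHPUnit\Framework\TestCase;
use App\Nova\Concerns\UsesEstimatedCounts;

class NovaLargeResourceTest extends TestCase
{
    /** Resources known to sit on large tables — maintained by hand. */
    private array $largeResources = [
        \App\Nova\Order::class,
        \App\Nova\Customer::class,
    ];

    public function test_large_resources_use_estimated_counts(): void
    {
        foreach ($this->largeResources as $resource) {
            $this->assertContains(
                UsesEstimatedCounts::class,
                class_uses_recursive($resource),
                "{$resource} should use UsesEstimatedCounts"
            );
        }
    }
}
```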

For the absolute minimum-effort fix — no trait, no abstraction, just the raw method — here it is in its shortest form:

The minimum viable fix
// The minimum viable fix. No trait, no abstraction.
// Add this single method to any Nova resource with a large table.

public static function countForPagination(
    NovaRequest $request,
    $query
) {
    if ($request->filled('search') || $request->filled('filters')) {
        return parent::countForPagination($request, $query);
    }

    return (int) (DB::selectOne(
        "SELECT n_live_tup FROM pg_stat_all_tables WHERE relname = ?",
        [(new static::$model)->getTable()]
    )?->n_live_tup ?? 0);
}

Schema-aware counting for multi-tenant applications

If your Laravel application uses PostgreSQL schemas for multi-tenancy — a common pattern where each tenant's data lives in a separate schema like tenant_42.orders — the basic trait needs adjustment. The pg_stat_all_tables catalog has separate schemaname and relname columns, and querying by relname alone may return results from the wrong schema.

Schema-aware estimated counts
<?php

namespace App\Nova\Concerns;

use Illuminate\Support\Facades\DB;
use Laravel\Nova\Http\Requests\NovaRequest;

trait UsesEstimatedCounts
{
    /**
     * Schema-aware estimated count.
     * Handles multi-tenant apps with non-public schemas
     * and tables with custom schema prefixes.
     */
    public static function countForPagination(
        NovaRequest $request,
        $query
    ) {
        if ($request->filled('search') || $request->filled('filters')) {
            return parent::countForPagination($request, $query);
        }

        $model = new static::$model;
        $table = $model->getTable();

        // Handle schema-qualified table names (e.g., "tenant_42.orders")
        if (str_contains($table, '.')) {
            [$schema, $tableName] = explode('.', $table, 2);
        } else {
            $schema = 'public';
            $tableName = $table;
        }

        $estimate = DB::selectOne(
            "SELECT n_live_tup FROM pg_stat_all_tables
             WHERE schemaname = ? AND relname = ?",
            [$schema, $tableName]
        );

        // Fallback to exact count if stats are unavailable
        if ($estimate === null || $estimate->n_live_tup === null) {
            return parent::countForPagination($request, $query);
        }

        return (int) $estimate->n_live_tup;
    }
}

This version splits schema-qualified table names, falls back to the public schema for unqualified names, and includes a safety fallback to the exact count if the statistics catalog returns nothing. The latter case can occur for newly created tables that have not yet been analyzed — a table created seconds ago will have no entry in pg_stat_all_tables until the statistics collector catches up.

Keeping the estimates fresh: autovacuum tuning

The accuracy of n_live_tup depends on how frequently autovacuum runs ANALYZE on the table. By default, PostgreSQL triggers auto-analyze after autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * table_size rows have been modified. For a 300,000-row table with default settings (threshold 50, scale factor 10%), that is 30,050 modifications before the estimate updates.

If your table sees heavy write traffic and you want tighter estimates, lower the scale factor for that specific table.

Per-table autovacuum tuning
-- If your n_live_tup estimates are too stale, tune autovacuum
-- to analyze more frequently on high-churn tables.

ALTER TABLE orders SET (
    -- Default: 50 rows must change before auto-analyze triggers
    autovacuum_analyze_threshold = 50,

    -- Default: 10% of table must change
    -- Lower this for large tables to get fresher estimates:
    autovacuum_analyze_scale_factor = 0.02
);

-- With 300K rows and a 2% scale factor, ANALYZE triggers after
-- 6,050 changes (50 + 300000 * 0.02) instead of the default 30,050.
-- More frequent ANALYZE = fresher n_live_tup estimates.

-- For the full autovacuum tuning picture, see:
-- https://www.postgresql.org/docs/current/routine-vacuuming.html

You can verify how stale your estimates are at any moment by checking the statistics collector directly.

Checking estimate freshness
-- How stale is n_live_tup? Check the last autovacuum run.

SELECT
    relname,
    n_live_tup,
    n_dead_tup,
    last_autovacuum,
    last_autoanalyze,
    -- How many rows changed since last analyze:
    n_mod_since_analyze
FROM pg_stat_all_tables
WHERE schemaname = 'public'
  AND relname = 'orders';

-- Example output:
--  relname | n_live_tup | n_dead_tup |    last_autovacuum     |    last_autoanalyze    | n_mod_since_analyze
-- ---------+------------+------------+------------------------+------------------------+---------------------
--  orders  |     299814 |       4201 | 2026-03-05 02:14:33+00 | 2026-03-05 02:14:35+00 |                 186
--
-- Translation: 186 rows have been inserted, updated, or deleted since
-- the last ANALYZE. The estimate is off by at most 186 rows.
-- On a 300K-row table, that is 0.06% error. Pagination does not care.

The n_mod_since_analyze column is particularly informative. It tells you exactly how many row modifications have occurred since the last ANALYZE. If that number is 186 on a 300,000-row table, your estimate is off by at most 0.06%. If that number is 45,000, your estimate may be meaningfully stale — and you should either tune autovacuum or run ANALYZE orders manually.
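And when the modification count is high, the refresh itself is cheap — ANALYZE reads a bounded sample of pages rather than scanning the whole table, so it typically finishes in seconds even on very large tables.

```sql
-- Refresh the statistics (and therefore n_live_tup) on demand:
ANALYZE orders;

-- Add VERBOSE to see progress messages about what was sampled:
ANALYZE VERBOSE orders;
```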

For a comprehensive treatment of autovacuum configuration — not just for count accuracy but for overall table bloat control and dead tuple cleanup — see the autovacuum tuning guide.

Nova Lenses: the same problem, slightly different

If you use Nova Lenses — custom filtered views of a resource — the count problem follows you there as well. A Lens that shows "high-value orders" or "unshipped orders this week" still runs count(*) for pagination, just with an additional WHERE clause.

Count handling in Nova Lenses
<?php

// Nova Lenses have the same count(*) problem.
// A Lens that shows "High-value orders" still counts ALL matching rows.

namespace App\Nova\Lenses;

use Laravel\Nova\Lenses\Lens;
use Laravel\Nova\Http\Requests\LensRequest;
use Laravel\Nova\Fields\ID;
use Laravel\Nova\Fields\Text;
use Laravel\Nova\Fields\Number;

class HighValueOrders extends Lens
{
    public static function query(LensRequest $request, $query)
    {
        return $query->where('total', '>', 1000)
                     ->orderBy('total', 'desc');
    }

    // The lens query adds a WHERE clause, so the count is filtered.
    // For filtered counts, exact count is usually fast enough —
    // the WHERE clause reduces the scan to matching rows only.
    //
    // But if 90% of your orders are high-value, the filtered count
    // is still scanning most of the table. In that case:
    public static function countForPagination(
        LensRequest $request,
        $query
    ) {
        // If the filter is highly selective (few results), exact is fine.
        // If the filter matches most rows, you need a different approach.

        // Option: cached count with a short TTL
        return cache()->remember(
            'nova_lens_high_value_orders_count',
            now()->addMinutes(5),
            fn () => parent::countForPagination($request, $query)
        );
    }

    public function fields(LensRequest $request)
    {
        return [
            ID::make()->sortable(),
            Text::make('Reference'),
            Number::make('Total')->sortable(),
        ];
    }
}

The saving grace is that Lenses typically have a WHERE clause that filters the table significantly. If your Lens shows 2,000 high-value orders out of 300,000 total, the count scans only the matching rows (assuming a proper index on the filter column) and finishes in milliseconds. The problem only reappears when the Lens's filter is not selective — when it matches most of the table.

For Lenses that match a large percentage of the table, a cached count with a short TTL is often the most practical approach. The cache absorbs the cost of the full count, and the TTL ensures the count refreshes periodically. Five minutes of staleness on a Lens count is rarely noticed by an admin user.

The soft-delete complication

Laravel's SoftDeletes trait adds a WHERE deleted_at IS NULL clause to every query, including Nova's count. This changes the performance characteristics in a way worth understanding.

Soft deletes and estimated counts
<?php

// Soft deletes add a WHERE clause to every query.
// This changes the count(*) performance characteristics.

// Without soft deletes:
// SELECT count(*) AS aggregate FROM "orders"
// -> Full table scan. 31 seconds on 300K rows.

// With soft deletes (SoftDeletes trait):
// SELECT count(*) AS aggregate FROM "orders"
// WHERE "orders"."deleted_at" IS NULL
// -> Still expensive, but an index can help.

// If you use soft deletes, add a partial index:
// CREATE INDEX idx_orders_not_deleted ON orders (id)
//   WHERE deleted_at IS NULL;
//
// This reduces the count scan to only non-deleted rows
// and avoids heap visits for deleted rows entirely.
// On a table where 10% of rows are soft-deleted,
// this saves 10% of the scan time. Not transformative.
//
// On a table where 90% of rows are soft-deleted
// (e.g., a "deleted items" archive), the partial index
// reduces the scan dramatically.

// The estimated count approach needs adjustment for soft deletes:
public static function countForPagination(
    NovaRequest $request,
    $query
) {
    if ($request->filled('search') || $request->filled('filters')) {
        return parent::countForPagination($request, $query);
    }

    // n_live_tup counts ALL live rows, including soft-deleted ones.
    // For tables with soft deletes, the estimate includes rows
    // that Nova's query will filter out.
    //
    // Options:
    // 1. Accept the overcount (pagination says "312,000" but
    //    only 280,000 are non-deleted). Usually fine for admin UX.
    // 2. Use a cached exact count with short TTL:
    $table = (new static::$model)->getTable();

    return cache()->remember(
        "nova_count_{$table}",
        now()->addMinutes(2),
        fn () => parent::countForPagination($request, $query)
    );
}

The n_live_tup estimate from pg_stat_all_tables counts all live rows in the table — including soft-deleted ones with a non-null deleted_at. If your table has 300,000 total rows and 20,000 are soft-deleted, the estimate will say 300,000 while Nova's filtered query sees only 280,000. For pagination, this means the footer shows "1-25 of 300,000" when the actual count is 280,000. The last several pages of results will be empty.

In practice, this is rarely noticed. Admin users almost never navigate to the last page. But if precision matters — or if a large percentage of your rows are soft-deleted — you have two options: accept the overcount (usually fine) or use a cached exact count with a short TTL. The cached approach runs the real count once, serves it from cache for a few minutes, and refreshes transparently. The first request pays the full cost; subsequent requests are free.
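If you want to know how far off the estimate actually is for your table, you can measure the gap directly. This is a one-off diagnostic — the second subquery is the expensive exact count, so run it ad hoc, never per request (table and column names follow the soft-delete example above):

```sql
-- Measure the soft-delete overcount: statistics estimate vs. the
-- count Nova's filtered query actually sees.
SELECT
    (SELECT n_live_tup
       FROM pg_stat_all_tables
      WHERE schemaname = 'public'
        AND relname = 'orders')        AS estimated_total,
    (SELECT count(*)
       FROM orders
      WHERE deleted_at IS NULL)        AS visible_to_nova;
```

If the two numbers differ by a few percent, accept the overcount. If they differ by a third, reach for the cached exact count.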

The cached count alternative

If you need exact counts but cannot afford to run count(*) on every page load, a cached count with application-level invalidation offers a middle path.

Cached exact counts with invalidation
<?php

// A hybrid approach: cache the exact count with intelligent invalidation.
// More accurate than n_live_tup, cheaper than counting every time.

namespace App\Nova\Concerns;

use Illuminate\Support\Facades\Cache;
use Laravel\Nova\Http\Requests\NovaRequest;

trait UsesCachedCounts
{
    public static function countForPagination(
        NovaRequest $request,
        $query
    ) {
        if ($request->filled('search') || $request->filled('filters')) {
            return parent::countForPagination($request, $query);
        }

        $table = (new static::$model)->getTable();
        $cacheKey = "nova_count_{$table}";

        return Cache::remember($cacheKey, now()->addMinutes(5), function () use ($request, $query) {
            return parent::countForPagination($request, $query);
        });
    }

    /**
     * Call this from model observers or event listeners
     * when the table data changes.
     */
    public static function invalidateCountCache()
    {
        $table = (new static::$model)->getTable();
        Cache::forget("nova_count_{$table}");
    }
}

// In your model's Observer:
//
// class OrderObserver
// {
//     public function created(Order $order)
//     {
//         \App\Nova\Order::invalidateCountCache();
//     }
//
//     public function deleted(Order $order)
//     {
//         \App\Nova\Order::invalidateCountCache();
//     }
// }
//
// Trade-off: you must remember to invalidate on every write path.
// Bulk inserts via DB::table() bypass observers entirely.
// This is the fundamental fragility of application-level caching.

This approach has the advantage of returning exact counts — no drift, no staleness within the TTL window. The disadvantage is that invalidation is your responsibility. If orders are created through Eloquent models, the observer fires and the cache clears. If orders are created via DB::table('orders')->insert(), raw SQL, database triggers, or external ETL processes, the observer never fires and the cache serves a stale count until the TTL expires.

I should be honest about this: application-level cache invalidation is a genuinely hard problem. Every cache invalidation strategy has edge cases where the cache serves stale data. The n_live_tup estimate avoids this problem entirely because it is maintained by PostgreSQL itself, at the database level, and it accounts for all write paths — Eloquent, raw SQL, COPY, triggers, everything. The trade-off is accuracy versus completeness of invalidation, and for pagination, I would generally recommend the estimate.

The connection pool impact nobody mentions

There is a secondary cost to slow count(*) queries that rarely appears in discussions of this problem, and it deserves attention: connection pool pressure.

Connection pool pressure from count queries
-- The hidden cost: count(*) and connection pool pressure.
--
-- A typical Nova admin panel with 5 users:
--   Each user loads index pages while navigating
--   Each page load sends count(*) + the data query
--   count(*) holds a connection for 31 seconds
--
-- With 5 users clicking around:
--   5 connections held for 31 seconds each
--   If your pool has 20 connections, 25% are occupied counting rows
--   If users click to another page before count finishes,
--   the old count query continues running — connection still held
--
-- With 20 Nova users (common in enterprise admin panels):
--   20 concurrent count(*) queries = 20 connections for 31 seconds
--   Pool exhaustion is not theoretical — it is Tuesday

-- Check active count queries right now:
SELECT
    pid,
    now() - query_start AS duration,
    state,
    left(query, 80) AS query
FROM pg_stat_activity
WHERE query ILIKE 'select count(%)%aggregate%'
  AND state = 'active'
ORDER BY duration DESC;

-- If you see queries running for > 10 seconds here,
-- those are your Nova index page counts.

A count(*) query that takes 31 seconds holds a database connection for 31 seconds. Your application's supply of database connections — one per PHP-FPM worker, or a shared pool if you run Octane or PgBouncer — has one fewer connection available for the duration. If five admin users are browsing Nova simultaneously, five connections are occupied counting rows. If those users navigate to another page before the count finishes, the old count query continues running in PostgreSQL (the server does not know the client has moved on), and a new count query starts on a new connection.

I have seen this compound into pool exhaustion on applications where Nova is heavily used. Twenty admin users, each generating a 31-second count query every time they click to a new page — that is 20 connections held for extended periods, repeatedly. On a production database also serving the main application's traffic, this can crowd out real user requests. The admin panel's counting problem becomes the customer-facing application's availability problem.

This is the kind of second-order effect that makes the count(*) problem worse than the raw numbers suggest. It is not just that your admin panel is slow. It is that your admin panel is monopolizing database connections that your customers need.
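When the pool is under pressure right now, pg_stat_activity offers a stopgap: cancel the abandoned counts. The sketch below reuses the same query-signature pattern matched earlier — adjust the pattern and the threshold to your own workload before running it.

```sql
-- Stopgap: cancel count(*) queries running longer than 30 seconds.
-- pg_cancel_backend sends a cancel request (like Ctrl+C in psql),
-- not a kill — the connection survives and returns to the pool.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE query ILIKE 'select count(%)%aggregate%'
  AND state = 'active'
  AND now() - query_start > interval '30 seconds';
```

A more durable guard is a statement_timeout on the connection your admin panel uses, so runaway counts cancel themselves instead of waiting for manual intervention.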

Finding the count queries hiding in production

If you suspect this problem but are not certain, pg_stat_statements will confirm it. This extension tracks query statistics — call counts, execution times, rows returned — for every query pattern your application sends.

Finding Nova's count queries in pg_stat_statements
-- Find the count(*) queries hiding in your PostgreSQL.
-- These are the Nova pagination signatures in pg_stat_statements.

SELECT
    left(query, 100) AS query_preview,
    calls,
    mean_exec_time::numeric(10,2) AS avg_ms,
    total_exec_time::numeric(10,2) AS total_ms,
    rows / NULLIF(calls, 0) AS avg_rows
FROM pg_stat_statements
WHERE query ILIKE 'select count(%)%aggregate%from%'
ORDER BY total_exec_time DESC
LIMIT 10;

-- What to look for:
--   calls: high number = this runs on every page load
--   avg_ms: > 1000 = full table scan territory
--   total_ms: cumulative damage — if one query has burned
--     200,000ms of total execution time, that is 3+ minutes
--     of CPU time your database spent counting rows
--
-- The query signature 'select count(*) as aggregate from "orders"'
-- is Nova's exact format. You will recognize it immediately.

The query signature select count(*) as aggregate from "orders" is Nova's exact format. You will recognize it immediately. The total_exec_time column shows the cumulative CPU time this query has consumed — and on a busy admin panel with a large table, this number can be eye-opening. I have seen single count(*) queries accumulate hours of total execution time over a week.

If pg_stat_statements is not enabled on your PostgreSQL instance, enable it. Add shared_preload_libraries = 'pg_stat_statements' to your postgresql.conf and restart. It is the single most valuable diagnostic tool available to you, and its overhead is negligible.
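For reference, enabling it is two steps — a server config change, then an extension in each database you want to inspect:

```sql
-- Step 1: in postgresql.conf, then restart PostgreSQL:
--   shared_preload_libraries = 'pg_stat_statements'

-- Step 2: in each database you want to inspect:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Optional: reset the counters to start a clean measurement window:
SELECT pg_stat_statements_reset();
```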

An honest counterpoint: when this fix is not enough

I have presented the n_live_tup estimate as the solution to Nova's count problem, and for the specific case of unfiltered index page pagination, it is effective. But I would be a poor waiter indeed if I did not acknowledge the boundaries of this approach.

The estimate is not exact. For pagination footers, this does not matter. For reporting dashboards that display exact counts — "you have 312,847 orders this quarter" — the estimate may be off by hundreds or thousands. If your Nova dashboard includes metric cards or charts that rely on exact counts, those will still run count(*) through Nova's metrics system, and the performance problem remains for those specific components.

The estimate does not help with filtered counts. The n_live_tup value is the total row estimate for the entire table. There is no equivalent estimate for "rows where status = 'pending'" or "rows where created_at > '2026-01-01'." For filtered counts, you need either a real count(*) with a selective index (usually fast enough) or a materialized count maintained by triggers or application logic.
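A trigger-maintained count, for the record, looks like the sketch below (the counter table and function names are illustrative, not from any library). Reading the count becomes a single-row lookup; the trade-off is write overhead and lock contention on that one counter row under heavy concurrent writes.

```sql
-- Sketch: an exact, trigger-maintained row count.
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);

-- One-time seed (this is the full scan, paid once):
INSERT INTO row_counts (table_name, n)
SELECT 'orders', count(*) FROM orders;

CREATE OR REPLACE FUNCTION adjust_orders_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = 'orders';
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE table_name = 'orders';
    END IF;
    RETURN NULL;  -- AFTER triggers ignore the return value
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_count_maintenance
AFTER INSERT OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION adjust_orders_count();

-- Note: TRUNCATE bypasses row-level triggers; add a statement-level
-- TRUNCATE trigger that resets n to zero if you truncate this table.

-- Reading the count is now an index lookup, not a scan:
SELECT n FROM row_counts WHERE table_name = 'orders';
```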

The estimate does not survive schema changes gracefully. If you rename a table, the pg_stat_all_tables entry updates, but your hardcoded relname query will not. If you partition a table, the parent table's n_live_tup may or may not reflect the sum of its partitions depending on your PostgreSQL version and the state of the statistics. The trait handles the common case; edge cases require attention.

This is a workaround, not a fix. The underlying problem — Nova's requirement for a total count on every index page load — remains. You are replacing an expensive way of getting the count with a cheap way. The count itself is still being fetched. On a long enough timeline, "how do we avoid this count entirely?" is the more interesting question. Keyset pagination, SimplePaginator, and query result caching each answer it differently.
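Keyset pagination, to sketch the idea, answers it by dropping the total count entirely: instead of "page N of M," each page is fetched relative to the last row of the previous page. The literal id below is a placeholder, and the column names follow the earlier examples.

```sql
-- Keyset (cursor) pagination: no count(*), no OFFSET.
-- First page:
SELECT id, reference, total
FROM orders
ORDER BY id DESC
LIMIT 25;

-- Next page: pass the last id the client saw (placeholder value):
SELECT id, reference, total
FROM orders
WHERE id < 318400
ORDER BY id DESC
LIMIT 25;

-- Each page is a bounded index range scan — constant cost no matter
-- how deep the user paginates.
```

Laravel exposes this pattern as cursorPaginate(); Nova's index UI, however, is built around numbered pages, which is why the count keeps coming back.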

The alternative: transparent count caching with Gold Lapel

The estimated-count workaround is effective. It drops 31-second page loads to sub-millisecond. But it has three limitations worth acknowledging.

First, it requires a code change in every Nova resource with a large table. You must remember to add the trait or override, and you must remember to do it before the table grows large enough to cause pain.

Second, the count is an estimate, not an exact value. For pagination, the difference is invisible. For other use cases — reporting dashboards, export totals, audit screens — the distinction may matter.

Third, the approach is Nova-specific. If your application also runs count(*) in Eloquent queries, API endpoints, or Artisan commands, those queries still scan the full table.

How Gold Lapel handles count(*)
-- What Gold Lapel does with count(*) queries.
--
-- Nova sends:
SELECT count(*) AS aggregate FROM "orders";

-- Without Gold Lapel:
--   PostgreSQL scans 300,000 rows. 31 seconds. Every page load.

-- With Gold Lapel:
--   First request: query executes normally. Result cached. 31 seconds.
--   Second request: served from cache. < 1 millisecond.
--
-- When does the cache invalidate?
--   When Gold Lapel observes a write to the "orders" table:
--   INSERT, UPDATE, DELETE, TRUNCATE — any DML that changes row count.
--   The cached count is evicted. Next read executes fresh.
--
-- The result: exact counts (not estimates), at cache speed,
-- with automatic invalidation. No query changes. No trait.
-- No countForPagination override. Just composer require goldlapel/goldlapel-laravel.

-- Your Nova resource stays completely standard:
-- class Order extends Resource { ... }
-- No workarounds. The proxy handles it.

Gold Lapel sits between your Laravel application and PostgreSQL as a transparent proxy. It observes the SELECT count(*) FROM orders query, caches the result, and serves it on subsequent requests in under a millisecond. When a write hits the orders table — an INSERT, UPDATE, DELETE, or TRUNCATE — the cached count is automatically invalidated. The next read executes fresh against PostgreSQL and re-caches the result.

The difference from the estimated-count approach: you get exact counts at cache speed, with no query changes — just composer require goldlapel/goldlapel-laravel and the wrapper handles the rest. Your Nova resources stay completely standard. No trait. No override. No per-table configuration. The proxy handles it for every table, every query, every framework component that happens to run count(*).

The difference from the application-level cached count: invalidation is handled at the wire protocol level, not by model observers. Every write path — Eloquent, raw SQL, COPY, database triggers, external ETL processes — triggers invalidation because Gold Lapel sees the write as it passes through the proxy. There are no forgotten observers, no bypass paths, no stale caches from bulk imports.

The write-aware invalidation is the part that matters. A naive cache with a TTL would serve stale counts after writes — the user creates an order and the count does not change. Gold Lapel's invalidation is triggered by the write itself, not by a timer. The count updates when the data updates, which is when the user expects it to update.

Choosing your approach

I have presented several options, and I owe you a direct recommendation rather than leaving you to sort through them.

If you have one or two large tables and no soft deletes: use the UsesEstimatedCounts trait. It is simple, effective, and requires no infrastructure changes. You will solve the problem in fifteen minutes.

If you have many large tables: use the trait on all of them, and add a test or CI check that verifies every Nova resource with a table over 50,000 rows includes the trait. The cost of forgetting to add it grows with the number of resources.

If you use soft deletes and the overcount bothers you: use the UsesCachedCounts trait with a short TTL (2-5 minutes). Accept that the first request after cache expiry pays the full count cost, and tune the TTL based on your tolerance for staleness.

If you have many large tables across multiple applications sharing the same PostgreSQL instance: consider Gold Lapel. The transparent proxy approach eliminates the count problem across all applications, all frameworks, all query paths, without per-resource configuration.

For most Laravel applications with Nova, the estimated count trait is the right answer. It is boring, reliable, and fast. In infrastructure, boring is the highest compliment available.


The sequential scans that Nova's count(*) provokes are, in truth, one facet of a broader Laravel performance conversation. I have prepared a comprehensive Laravel PostgreSQL performance tuning guide that addresses N+1 queries, indexing strategy, and the Eloquent patterns that scale — and the ones that quietly do not.