Laravel PostgreSQL Performance Tuning Guide
A watercolour of Eloquent was attempted. It loaded every colour on the palette when only two were needed.
Eloquent Is a Gentleman's Agreement
Eloquent is one of the most expressive ORMs in any language. It lets you interact with PostgreSQL through fluent, readable PHP that maps cleanly to your domain objects. For many applications, this abstraction is a genuine asset — it reduces boilerplate, enforces consistent patterns, and lets developers move quickly without writing raw SQL.
The difficulty is that Eloquent's defaults are optimized for developer convenience, not database performance. Every $user->posts property access can silently generate a query. Every withCount() adds a correlated subquery. Every chunk() call uses OFFSET pagination that degrades as you go deeper into the result set.
None of these are bugs. They are trade-offs — perfectly reasonable ones for small datasets and low-traffic applications. But as your tables grow from thousands to millions of rows, and your traffic scales from dozens to hundreds of concurrent requests, these defaults compound into measurable latency. This is worth attending to.
This guide covers the full range of PostgreSQL performance techniques available to Laravel developers: eager loading strategies, large result set processing, full-text search, connection management, raw queries, query builder patterns, configuration tuning, and monitoring. Each section provides enough context to act on immediately, with links to dedicated deep-dive articles for topics that warrant extended treatment.
Eager Loading — The N+1 Cure That Needs Its Own Diagnosis
The N+1 query problem is the most frequently encountered Eloquent performance issue. It occurs when you load a collection of models, then access a relationship on each one individually:
```php
// N+1: 1 query to fetch users, then 1 query per user to fetch posts
$users = User::all();

foreach ($users as $user) {
    echo $user->posts->count(); // Each access fires a SELECT
}
```

If you have 100 users, this generates 101 queries. Eloquent's with() method solves this by batching the related model queries:
```php
// 2 queries total: one for users, one for all related posts
$users = User::with('posts')->get();

foreach ($users as $user) {
    echo $user->posts->count(); // Already loaded in memory
}
```

Nested Eager Loading
You can eager load nested relationships using dot notation:
```php
$users = User::with('orders.items.product')->get();
```

This generates one query per relationship level — four queries total in this case. When the chain gets deep (three or more levels), it is worth pausing to ask whether you actually need all that data.
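When only a few columns from each level are needed, the relationship queries can be trimmed with Laravel's colon syntax. A sketch, assuming conventional key names (user_id, order_id, product_id); the foreign keys must stay in the select lists so Eloquent can match related models back to their parents:

```php
$users = User::with([
    'orders:id,user_id,created_at',
    'orders.items:id,order_id,product_id,quantity',
    'orders.items.product:id,name,price',
])->get();
```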
Constraining Eager Loads
Eager loads can be constrained with a closure to filter the related records:
```php
$users = User::with(['orders' => function ($query) {
    $query->where('status', 'active')
        ->where('created_at', '>=', now()->subYear())
        ->orderBy('created_at', 'desc');
}])->get();
```

This is particularly valuable for relationships with large datasets. Without the constraint, with('orders') loads every order for every user. The constraint lets PostgreSQL handle the filtering, reducing both query time and memory usage.
withCount vs with
When you need the count of related records rather than the records themselves, withCount() is the more considerate approach:
```php
// Good: adds a single subquery, returns a count
$users = User::withCount('posts')->get();
// Access via $user->posts_count

// Wasteful: loads every post into memory just to count them
$users = User::with('posts')->get();
// Then: $user->posts->count()
```

Automatic Eager Loading
You can define relationships to be automatically eager-loaded by setting the $with property on your model:
```php
class User extends Model
{
    protected $with = ['profile', 'role'];
}
```

Every query for User will now automatically include profile and role. This is convenient for relationships that are almost always needed, but it can work against you when queries do not need those relationships. Override per-query with without():
```php
// Skip automatic eager loading for this query
$userIds = User::without(['profile', 'role'])->pluck('id');
```

Use $with sparingly. For a deeper examination, see Eloquent Eager Loading Anti-Patterns.
whereHas and the Subquery Trap
whereHas() filters parent models based on the existence of related records:
```php
// Find users who have at least one published post
$users = User::whereHas('posts', function ($query) {
    $query->where('status', 'published');
})->get();
```

This generates a correlated subquery:
```sql
SELECT * FROM users
WHERE EXISTS (
    SELECT 1 FROM posts
    WHERE posts.user_id = users.id
      AND posts.status = 'published'
);
```

On a users table with millions of rows and no supporting index on posts.user_id, this can become expensive. For complex conditions on large tables, consider rewriting as an explicit JOIN:
```php
$users = User::select('users.*')
    ->join('posts', 'posts.user_id', '=', 'users.id')
    ->where('posts.status', 'published')
    ->distinct()
    ->get();
```

The join version lets PostgreSQL use a hash join or merge join instead of a correlated subquery. See Eloquent whereHas PostgreSQL Performance for detailed benchmarks.
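For a single simple condition, whereRelation() is a tidier shorthand. Be aware that it generates the same correlated EXISTS as whereHas(), so an index covering the correlation column and the filter (the index name below is illustrative) is what actually makes either form fast:

```php
// Shorthand for the simple whereHas() case above
$users = User::whereRelation('posts', 'status', 'published')->get();

// Either form benefits from a composite index such as:
//   CREATE INDEX idx_posts_user_status ON posts (user_id, status);
```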
withCount and Aggregate Optimization
withCount() generates a correlated subquery for each aggregate. One is manageable. Five begins to add up:
```php
// Five correlated subqueries — each scans its related table
$users = User::withCount(['posts', 'comments', 'orders', 'reviews', 'logins'])->get();
```

When several of the aggregates come from the same table — here, a hypothetical activities table with a type column — a single grouped subquery joined once can perform all the aggregation in one scan:
```php
// Aggregate once per user in a derived table, then join it
$agg = DB::table('activities')
    ->selectRaw("user_id,
        COUNT(*) FILTER (WHERE type = 'post') as posts_count,
        COUNT(*) FILTER (WHERE type = 'comment') as comments_count")
    ->groupBy('user_id');

$users = User::select('users.*', 'agg.posts_count', 'agg.comments_count')
    ->leftJoinSub($agg, 'agg', 'agg.user_id', '=', 'users.id')
    ->get();
```

The leftJoinSub() keeps users with no activity rows (their counts come back NULL). For a full analysis, see Eloquent withCount PostgreSQL Performance.
Chunk, Cursor, and Lazy — Processing Large Result Sets
When you need to iterate over thousands or millions of rows, loading everything into memory at once simply will not do. Laravel provides four strategies, each with different memory and connection trade-offs.
chunk()
```php
User::orderBy('id')->chunk(1000, function ($users) {
    foreach ($users as $user) {
        // Process each user
    }
});
```

Each batch is a separate query. Memory stays bounded because only 1,000 models exist at a time. The difficulty is OFFSET — PostgreSQL must scan and discard all rows before the offset, so performance degrades linearly with depth.
chunkById()
```php
User::orderBy('id')->chunkById(1000, function ($users) {
    foreach ($users as $user) {
        // Process each user
    }
});
```

Instead of LIMIT 1000 OFFSET 9000, this generates WHERE id > 9000 LIMIT 1000. PostgreSQL uses the primary key index to jump directly to the right row, making every batch equally fast regardless of depth.
cursor()
```php
foreach (User::orderBy('id')->cursor() as $user) {
    // Only one User model in memory at a time
}
```

Only one Eloquent model is hydrated at a time, so PHP-side memory stays minimal — though with PDO's default buffered mode, the raw result set still sits in the driver's buffer. And cursor() holds the database connection open for the entire iteration.
lazy() and lazyById()
```php
foreach (User::orderBy('id')->lazyById(1000) as $user) {
    // Generator-based, batched with keyset pagination
}
```

Under the hood, lazyById() fetches 1,000 rows, yields them one at a time through the generator, then fetches the next batch. The connection is released between batches. This gives you the ergonomics of cursor() with the connection safety of chunkById().
When to Use Each
| Method | Memory | Connection | Performance at Depth | Best For |
|---|---|---|---|---|
| chunk() | Bounded (batch size) | Released between batches | Degrades (OFFSET) | Small tables, simple processing |
| chunkById() | Bounded (batch size) | Released between batches | Constant | Large tables, batch jobs |
| cursor() | Minimal (one row) | Held for entire iteration | N/A (single query) | Small result sets, read-only |
| lazyById() | Minimal (one row) | Released between batches | Constant | Large tables, streaming exports |
For most production use cases, lazyById() is the right default. See Laravel Chunk, Cursor, and Lazy with PostgreSQL for detailed benchmarks.
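As a sketch of the streaming-export case (the route and column names are illustrative), lazyById() pairs naturally with a streamed response:

```php
use Symfony\Component\HttpFoundation\StreamedResponse;

Route::get('/export/users', function () {
    return new StreamedResponse(function () {
        $out = fopen('php://output', 'w');
        fputcsv($out, ['id', 'email', 'created_at']);

        // One 1,000-row batch in memory at a time;
        // connection released between batches
        foreach (User::orderBy('id')->lazyById(1000) as $user) {
            fputcsv($out, [$user->id, $user->email, $user->created_at]);
        }

        fclose($out);
    }, 200, ['Content-Type' => 'text/csv']);
});
```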
Full-Text Search with Scout and tsvector
PostgreSQL has a full-text search engine built in. It does not offer every feature that Elasticsearch or Meilisearch provide — those are excellent, purpose-built tools — but for many applications, PostgreSQL's native capabilities eliminate the need for a separate search service entirely.
The Basics: tsvector and tsquery
```sql
-- Basic full-text search
SELECT * FROM articles
WHERE to_tsvector('english', title || ' ' || body)
      @@ plainto_tsquery('english', 'database performance');
```

The to_tsvector() function normalizes the text: it lowercases, removes stop words, and stems words. The plainto_tsquery() function converts a search string into a tsquery with AND between terms.
Stored tsvector Columns and GIN Indexes
The standard approach is to store the precomputed vector and index it:
```sql
-- Add a tsvector column
ALTER TABLE articles ADD COLUMN search_vector tsvector;

-- Populate it
UPDATE articles
SET search_vector = to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''));

-- Create a GIN index
CREATE INDEX idx_articles_search ON articles USING gin(search_vector);
```

Using Full-Text Search in Laravel
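On PostgreSQL 12 and later, an alternative to populating search_vector by hand is a stored generated column, which the database maintains automatically on every insert and update. A sketch, issued from a Laravel migration, assuming the column was not already added manually:

```php
// The generation expression must be immutable, which to_tsvector
// with an explicit config ('english') is; the one-argument form is not
DB::statement("
    ALTER TABLE articles
    ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED
");
```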
Laravel does not have built-in support for tsvector queries, but you can use raw expressions within Eloquent:
```php
// Search using the stored tsvector column
$articles = Article::whereRaw(
    "search_vector @@ plainto_tsquery('english', ?)",
    [$searchTerm]
)
->orderByRaw("ts_rank(search_vector, plainto_tsquery('english', ?)) DESC", [$searchTerm])
->get();
```

When to Use What
| Approach | Speed | Features | Complexity |
|---|---|---|---|
| ILIKE '%term%' | Slow (no index unless trigram) | Pattern matching only | Minimal |
| tsvector + GIN | Fast | Stemming, ranking, language support | Moderate |
| Elasticsearch/Meilisearch | Fast | Fuzzy matching, typos, facets, suggestions | High (separate service) |
For full integration details, see Laravel Scout with PostgreSQL tsvector.
Horizon and Connection Management
Every Laravel process that talks to PostgreSQL holds a database connection. In a typical deployment, there are often more processes than one might expect: php-fpm workers (up to 50 connections from the web tier alone), queue workers (20 connections across four queues), scheduler, and various artisan commands.
PostgreSQL's default max_connections is 100. A deployment with 50 php-fpm workers and 20 queue workers is already at 70% capacity before accounting for monitoring tools, migrations, or admin connections. This is the sort of arithmetic that benefits from early attention.
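The arithmetic is easy to verify live. This read-only snippet, runnable from php artisan tinker, groups current connections by application and state:

```php
$rows = DB::select("
    SELECT application_name AS app,
           state,
           count(*) AS connections
    FROM pg_stat_activity
    WHERE datname = current_database()
    GROUP BY 1, 2
    ORDER BY connections DESC
");
```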
PgBouncer for Connection Pooling
PgBouncer sits between your Laravel application and PostgreSQL, multiplexing many application connections onto fewer PostgreSQL connections. In transaction mode, a PostgreSQL connection is assigned only for the duration of a transaction and returned to the pool immediately after.
However, PgBouncer in transaction mode has an important incompatibility with PDO prepared statements. The solution:
```php
// config/database.php
'pgsql' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'options' => [
        PDO::ATTR_EMULATE_PREPARES => true, // Disables server-side prepared statements
    ],
],
```

With PDO::ATTR_EMULATE_PREPARES set to true, PDO handles parameter binding client-side and sends the final SQL string to PostgreSQL. This makes every query stateless from the connection's perspective.
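For orientation, a minimal pgbouncer.ini in transaction mode might look like the following; every value is illustrative and should be sized to your own workload:

```ini
[databases]
forge = host=127.0.0.1 port=5432 dbname=forge

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

Point Laravel's DB_HOST and DB_PORT at PgBouncer (port 6432 here) rather than at PostgreSQL directly.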
For the full details, see Laravel, PgBouncer, and Prepared Statements and Connection Pooling.
Raw Queries — When Eloquent Steps Aside
Eloquent covers the common cases well, but PostgreSQL has capabilities that no ORM can fully abstract: CTEs, window functions, lateral joins, recursive queries, and advanced JSONB operations. When you need these, Laravel's raw query API provides direct access.
The Raw Query API
```php
// SELECT queries with parameter binding
$users = DB::select('SELECT * FROM users WHERE created_at > :since', [
    'since' => '2025-01-01',
]);

// INSERT, UPDATE, DELETE — returns affected row count
$affected = DB::update('UPDATE users SET active = true WHERE last_login > :since', [
    'since' => now()->subMonth(),
]);

// DDL and other statements that don't return results
DB::statement('CREATE INDEX CONCURRENTLY idx_users_email ON users (email)');
```

Raw Expressions Within Eloquent
```php
// selectRaw: add computed columns
$users = User::selectRaw('users.*, extract(year FROM created_at) as signup_year')
    ->get();

// whereRaw: PostgreSQL-specific WHERE conditions
$articles = Article::whereRaw("search_vector @@ plainto_tsquery('english', ?)", [$term])
    ->get();

// orderByRaw: custom ordering logic
$products = Product::orderByRaw('
    CASE WHEN stock > 0 THEN 0 ELSE 1 END,
    price ASC
')->get();
```

CTEs and Window Functions
```php
// CTE: find users with their order rank
$results = DB::select("
    WITH ranked_orders AS (
        SELECT
            user_id,
            order_id,
            total,
            ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY total DESC) as rank
        FROM orders
        WHERE created_at >= :since
    )
    SELECT u.name, ro.order_id, ro.total
    FROM users u
    JOIN ranked_orders ro ON ro.user_id = u.id
    WHERE ro.rank = 1
", ['since' => now()->subYear()]);

// Recursive CTE: traverse a category tree
$tree = DB::select("
    WITH RECURSIVE category_tree AS (
        -- Base case: root categories
        SELECT id, name, parent_id, 0 as depth
        FROM categories
        WHERE parent_id IS NULL
        UNION ALL
        -- Recursive case: children
        SELECT c.id, c.name, c.parent_id, ct.depth + 1
        FROM categories c
        JOIN category_tree ct ON c.parent_id = ct.id
    )
    SELECT * FROM category_tree ORDER BY depth, name
");
```

Query Scopes Wrapping Raw SQL
You can encapsulate raw SQL complexity within Eloquent query scopes, preserving the fluent API while using PostgreSQL-specific features under the hood:
```php
class Article extends Model
{
    public function scopeSearch($query, string $term)
    {
        return $query
            ->whereRaw("search_vector @@ plainto_tsquery('english', ?)", [$term])
            ->orderByRaw("ts_rank(search_vector, plainto_tsquery('english', ?)) DESC", [$term]);
    }

    public function scopePublishedInPeriod($query, $start, $end)
    {
        return $query->whereRaw(
            'published_at BETWEEN ? AND ?',
            [$start, $end]
        );
    }
}

// Clean, readable usage
$articles = Article::search('database performance')
    ->publishedInPeriod('2025-01-01', '2025-12-31')
    ->paginate(20);
```

Migrations with Raw SQL
Laravel migrations support raw SQL for PostgreSQL-specific DDL that the schema builder cannot express:
```php
public function up()
{
    // Partial index — only index active users
    DB::statement('
        CREATE INDEX idx_users_active_email ON users (email)
        WHERE active = true
    ');

    // Expression index — index on lower(email)
    DB::statement('
        CREATE INDEX idx_users_lower_email ON users (lower(email))
    ');

    // GIN index on JSONB column
    DB::statement('
        CREATE INDEX idx_products_metadata ON products USING gin(metadata)
    ');
}
```

These indexes — partial indexes, expression indexes, and GIN indexes — are among PostgreSQL's most valuable performance tools.
Query Builder Patterns for PostgreSQL
Laravel's query builder includes several methods that map directly to PostgreSQL-specific features.
Upsert
The upsert() method maps to PostgreSQL's INSERT ... ON CONFLICT:
```php
// Insert or update based on the email column
User::upsert(
    [
        ['email' => 'alice@example.com', 'name' => 'Alice', 'login_count' => 1],
        ['email' => 'bob@example.com', 'name' => 'Bob', 'login_count' => 1],
    ],
    ['email'],              // Unique columns (conflict target)
    ['name', 'login_count'] // Columns to update on conflict
);
```

This generates:
```sql
INSERT INTO users (email, name, login_count)
VALUES ('alice@example.com', 'Alice', 1), ('bob@example.com', 'Bob', 1)
ON CONFLICT (email) DO UPDATE
SET name = EXCLUDED.name, login_count = EXCLUDED.login_count;
```

The upsert() method is atomic and efficient — it avoids the read-then-write race condition of separate SELECT + INSERT/UPDATE logic.
JSONB Queries
```php
// whereJsonContains: uses the @> operator
$products = Product::whereJsonContains('metadata->tags', 'featured')->get();

// Arrow syntax for nested JSON access
$users = User::where('preferences->theme', 'dark')->get();

// whereJsonLength for array length checks
$products = Product::whereJsonLength('metadata->tags', '>', 3)->get();
```

For JSONB queries to use an index, you need a GIN index on the JSONB column. The @> containment operator (used by whereJsonContains) is GIN-indexable. The ->> text extraction operator is not — for frequently queried keys, consider an expression index.
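As a sketch of such an expression index (the name is arbitrary, and the double parentheses around the expression are required), targeting the preferences->theme lookup above:

```php
// In a migration: B-tree index on the extracted text value, so
// WHERE preferences->>'theme' = 'dark' can use an index scan
DB::statement("
    CREATE INDEX idx_users_pref_theme
    ON users ((preferences->>'theme'))
");
```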
Debugging Queries
```php
// Get the SQL with placeholders
$sql = User::where('active', true)->toSql();
// select * from "users" where "active" = ?

// Get the SQL with bindings substituted (Laravel 10+)
$sql = User::where('active', true)->toRawSql();
// select * from "users" where "active" = true

// Run EXPLAIN on a query
$plan = User::where('active', true)->explain();
```

The toRawSql() method is particularly useful for debugging — you can copy the output directly into psql or pgAdmin.
Database Configuration Tuning
Setting Statement and Lock Timeouts
```php
use Illuminate\Database\Events\ConnectionEstablished;

// In AppServiceProvider::boot()
DB::listen(function ($query) {
    if ($query->time > 1000) {
        Log::warning('Slow query', [
            'sql' => $query->sql,
            'time' => $query->time,
            'connection' => $query->connectionName,
        ]);
    }
});

// Set session parameters when a connection is established
Event::listen(ConnectionEstablished::class, function ($event) {
    if ($event->connectionName === 'pgsql') {
        $event->connection->statement("SET statement_timeout = '30s'");
        $event->connection->statement("SET lock_timeout = '10s'");
        $event->connection->statement("SET idle_in_transaction_session_timeout = '60s'");
    }
});
```

The statement_timeout parameter is essential in production — it prevents a single wayward query from holding a connection indefinitely. The idle_in_transaction_session_timeout is particularly important for queue workers.
Read Replicas
Laravel supports read/write connection splitting natively:
```php
'pgsql' => [
    'read' => [
        'host' => [
            env('DB_READ_HOST_1', '127.0.0.1'),
            env('DB_READ_HOST_2', '127.0.0.1'),
        ],
    ],
    'write' => [
        'host' => env('DB_WRITE_HOST', '127.0.0.1'),
    ],
    'sticky' => true,
    'driver' => 'pgsql',
    'database' => env('DB_DATABASE', 'forge'),
    // ... remaining config
],
```

The sticky option is important: when set to true, after a write operation, all subsequent reads within the same request are sent to the primary. This prevents stale reads from replication lag.
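The sticky option only covers a single request. When a later request must read its own recent write despite replication lag, a query can be pinned to the primary explicitly; a sketch using the query builder's useWritePdo():

```php
// Read from the write connection regardless of the read/write split
$user = DB::table('users')
    ->useWritePdo()
    ->where('id', $userId)
    ->first();
```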
Monitoring and Debugging
Performance tuning without measurement is guesswork — and guesswork, if you'll permit me, is beneath us.
Query Log
```php
// Enable query logging
DB::enableQueryLog();

// Run your application logic
$users = User::with('posts')->where('active', true)->get();

// Inspect the queries
$queries = DB::getQueryLog();

foreach ($queries as $query) {
    dump([
        'sql' => $query['query'],
        'bindings' => $query['bindings'],
        'time_ms' => $query['time'],
    ]);
}
```

This is a development tool only. The query log holds all queries in memory for the lifetime of the request.
Preventing Lazy Loading
```php
// AppServiceProvider::boot()
Model::preventLazyLoading(!app()->isProduction());
```

This turns every N+1 problem into an immediate, visible error during development.
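In production, where throwing would be disruptive, violations can be logged instead. A sketch using Model::handleLazyLoadingViolationUsing():

```php
Model::preventLazyLoading();
Model::handleLazyLoadingViolationUsing(function ($model, $relation) {
    Log::warning('Lazy loading violation', [
        'model' => get_class($model),
        'relation' => $relation,
    ]);
});
```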
pg_stat_statements
On the PostgreSQL side, pg_stat_statements is the definitive tool for understanding query performance. It ships with PostgreSQL but is not on by default: add it to shared_preload_libraries, restart, and run CREATE EXTENSION pg_stat_statements. Then:
```sql
-- Top 20 queries by total execution time
SELECT
    calls,
    round(total_exec_time::numeric, 2) as total_ms,
    round(mean_exec_time::numeric, 2) as mean_ms,
    round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) as percent_total,
    query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```

This reveals the queries consuming the most database time — your highest-impact optimization targets.
The Checklist
Allow me to present a prioritized summary, ordered by typical impact:
| Technique | When to Use | Impact |
|---|---|---|
| Eager loading (with()) | Any query that accesses relationships in a loop | High |
| chunkById() / lazyById() | Processing more than a few thousand rows | High |
| GIN indexes on tsvector/JSONB | Full-text search or JSONB queries | High |
| PgBouncer connection pooling | More than 50 concurrent database connections | High |
| whereRelation() over whereHas() | Filtering by related model attributes on large tables | Medium-High |
| upsert() for batch writes | Bulk insert-or-update operations | Medium |
| Read replica splitting | Read-heavy workloads (>80% reads) | Medium |
| statement_timeout | Any production deployment | Medium |
| Partial indexes | Queries that consistently filter on the same condition | Medium |
| Raw queries for CTEs/window functions | Analytics, reporting, complex aggregation | Medium |
I would suggest starting at the top. Eager loading and connection management alone resolve the majority of Laravel PostgreSQL performance concerns. The rest is refinement — and refinement is where the real satisfaction lies.