N+1 Query Problem
One query to fetch the list. N more to fetch each related row. A waste of perfectly good round trips, and the most common ORM performance mistake I encounter.
The N+1 query problem occurs when application code fetches a list of N records, then executes a separate database query for each record to load related data. Instead of 1 or 2 queries, the database receives N+1. It is the equivalent of sending someone to the cellar once for each bottle of wine, rather than once with a list. Each trip is quick. The aggregate cost — network round trips, connection overhead, and database load — is not. This is the most common performance anti-pattern in ORM-based applications and is solved by eager loading: telling the ORM to fetch related data in bulk using JOINs or IN queries.
What the N+1 query problem is
The pattern is deceptively simple. Your application needs to display a list of orders along with each order's customer name. The ORM fetches the orders in one query — so far, so good. Then, as you iterate over the results and access the customer relation on each order, the ORM fires a separate SELECT to load that customer. One query per row. Fifty rows, fifty trips to the cellar. The ORM does not mention what it has done. It rarely does.
-- 1 query to fetch all orders
SELECT * FROM orders WHERE status = 'pending';
-- returns 50 rows
-- Then, for EACH order, a separate query to fetch the customer
SELECT * FROM customers WHERE id = 101;
SELECT * FROM customers WHERE id = 102;
SELECT * FROM customers WHERE id = 103;
-- ... 47 more queries
-- Total: 1 + 50 = 51 queries for what should be 1 or 2

The "N+1" name comes from the math: 1 query for the initial list, plus N queries for the related data. With 50 orders, that is 51 queries. With 500 orders, it is 501. If you have two related objects per row — say, customer and shipping address — the count becomes 1 + N + N = 1,001 queries for 500 rows.
The reason this pattern is so pervasive is that each individual query is trivially fast. A primary key lookup on an indexed table returns in under a millisecond. The ORM code that triggers it looks innocent — order.customer.name is just an attribute access. Nothing about it signals that a database round trip is happening. This is the ORM's particular talent: hiding the cost of what it does on your behalf. The problem only reveals itself when you look at the total number of queries per request.
Why it matters
A single query taking 0.5ms is not a problem. Five hundred of them, serialized on the same connection, very much is.
- Network round trips compound — each query requires a round trip between the application and the database. In production, that is 1-5ms of network latency per query. 500 queries at 2ms each adds a full second of latency to the request, even though no individual query is slow.
- Connection overhead — each query consumes a connection slot, goes through the PostgreSQL protocol, gets parsed and planned. With connection pooling, these queries are serialized on the same connection, bottlenecking throughput.
- Database load scales with traffic — an N+1 problem that generates 500 queries per request becomes 500,000 queries per second at 1,000 requests per second. The database is doing orders of magnitude more work than it should be.
- Invisible in development — with a local database, no network latency, and a small dataset, the 50 extra queries add only 5ms. The problem only manifests under production conditions: real network latency, larger datasets, concurrent users. This is why N+1 problems survive code review. They do not look like problems until the household is under load.
The fix — a single JOIN query — replaces all of those round trips with one.
-- The fix: one query with a JOIN
SELECT o.*, c.name, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'pending';
-- 1 query instead of 51

Detecting N+1 queries
N+1 problems are easy to fix once you know they exist. The challenge is finding them — the ORM is a quiet accomplice, and the symptoms are diffuse. Three approaches, from database-level to application-level.
pg_stat_statements call counts
The most reliable detection method, and my preferred one. N+1 queries produce a distinctive signature in pg_stat_statements: a query pattern with a very high call count, very low mean execution time, and a query structure that looks like a primary key lookup.
-- Find repetitive query patterns in pg_stat_statements
-- High call counts with low mean time often signal N+1
SELECT
    calls,
    round(mean_exec_time::numeric, 2) AS mean_ms,
    round(total_exec_time::numeric, 2) AS total_ms,
    query
FROM pg_stat_statements
WHERE calls > 100
  AND mean_exec_time < 5
ORDER BY calls DESC
LIMIT 20;

If you see SELECT * FROM customers WHERE id = $1 with 50,000 calls and a mean time of 0.3ms, that is almost certainly an N+1 problem. The individual query is fast. The aggregate cost — 50,000 * 0.3ms = 15 seconds of total database time — is not.
Query logs
Temporarily set log_min_duration_statement = 0 to log every query, then look for the same query pattern repeating dozens or hundreds of times within milliseconds on the same connection.
-- Enable query logging to spot N+1 patterns
-- In postgresql.conf:
log_min_duration_statement = 0   # log all queries (use briefly!)
log_line_prefix = '%t [%p] %u@%d '
-- Then look for repeated patterns in the log:
-- 2026-03-21 10:00:01 [1234] app@mydb SELECT * FROM customers WHERE id = $1
-- 2026-03-21 10:00:01 [1234] app@mydb SELECT * FROM customers WHERE id = $1
-- 2026-03-21 10:00:01 [1234] app@mydb SELECT * FROM customers WHERE id = $1
-- Same query, same connection, milliseconds apart = N+1

This approach is useful for confirming a specific N+1 pattern but is too noisy for continuous monitoring. Turn it off after you have what you need.
ORM debug logging
Most ORMs can log every query they execute. Django's django.db.backends logger, Rails' ActiveRecord::Base.logger, SQLAlchemy's echo=True engine flag. Enable this in development and watch for repeated query patterns during page loads or API calls. Some frameworks also provide N+1 detection libraries — Django Debug Toolbar, Bullet (Rails), and nplusone (Python) — that flag the problem automatically.
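For Django specifically, a minimal settings fragment might look like this. It is standard Python logging configuration around Django's own django.db.backends logger; note that Django only emits these messages when DEBUG is true, which is another reason to keep this in development settings:

```python
# Development-only logging config: print every SQL statement Django runs.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "django.db.backends": {
            "handlers": ["console"],
            "level": "DEBUG",  # DEBUG level includes each query and its duration
        },
    },
}
```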
Fixing N+1 queries by framework
Every major ORM provides a mechanism to load related data eagerly — an admission, if you will, that the default behaviour was perhaps too clever for its own good. The syntax varies, but the underlying strategy is the same: either JOIN the related table in a single query, or fetch the related rows with a second query using an IN clause. Both approaches replace N queries with 1 or 2. One trip to the cellar, with a proper list in hand.
Django
# Django: N+1 problem
for order in Order.objects.filter(status='pending'):
    print(order.customer.name)  # each .customer triggers a query

# Fix with select_related (uses JOIN)
for order in Order.objects.select_related('customer').filter(status='pending'):
    print(order.customer.name)  # no additional queries

# Fix with prefetch_related (uses IN query)
for order in Order.objects.prefetch_related('items').filter(status='pending'):
    for item in order.items.all():  # no additional queries
        print(item.name)

Rails (ActiveRecord)
# Rails: N+1 problem
Order.where(status: 'pending').each do |order|
  puts order.customer.name  # each .customer triggers a query
end

# Fix with includes (Rails chooses JOIN or IN automatically)
Order.includes(:customer).where(status: 'pending').each do |order|
  puts order.customer.name  # no additional queries
end

# Fix with eager_load (always uses JOIN)
Order.eager_load(:customer).where(status: 'pending').each do |order|
  puts order.customer.name
end

SQLAlchemy
# SQLAlchemy: N+1 problem
orders = session.query(Order).filter_by(status='pending').all()
for order in orders:
    print(order.customer.name)  # each .customer triggers a query

# Fix with joinedload (uses JOIN)
from sqlalchemy.orm import joinedload

orders = (
    session.query(Order)
    .options(joinedload(Order.customer))
    .filter_by(status='pending')
    .all()
)

# Fix with subqueryload (second query that reuses the original as a subquery;
# selectinload is the IN-based alternative)
from sqlalchemy.orm import subqueryload

orders = (
    session.query(Order)
    .options(subqueryload(Order.items))
    .filter_by(status='pending')
    .all()
)

Prisma
// Prisma: N+1 problem
const orders = await prisma.order.findMany({ where: { status: 'pending' } });
for (const order of orders) {
  const customer = await prisma.customer.findUnique({
    where: { id: order.customerId }
  }); // each iteration triggers a query
}

// Fix with include
const orders = await prisma.order.findMany({
  where: { status: 'pending' },
  include: { customer: true } // fetched in a single query
});

Eloquent (Laravel)
// Eloquent (Laravel): N+1 problem
$orders = Order::where('status', 'pending')->get();
foreach ($orders as $order) {
    echo $order->customer->name; // each ->customer triggers a query
}

// Fix with eager loading
$orders = Order::with('customer')->where('status', 'pending')->get();
foreach ($orders as $order) {
    echo $order->customer->name; // no additional queries
}

How Gold Lapel relates
I should mention that this particular anti-pattern is one Gold Lapel was built to catch. Gold Lapel sits between your application and PostgreSQL as a query proxy, and it sees every query that passes through — including the telltale signature of N+1: the same parameterized query, repeated dozens or hundreds of times in rapid succession on the same connection. The pattern is unmistakable from where we sit.
When Gold Lapel identifies an N+1 sequence, it coalesces those individual queries into a single batched query. Instead of 500 separate SELECT * FROM customers WHERE id = $1 calls, it rewrites them into one SELECT * FROM customers WHERE id IN ($1, $2, ..., $500) and distributes the results back to the application as if the original queries had been executed individually.
This happens transparently. No application code changes. No ORM configuration. The application continues to issue N+1 queries, and Gold Lapel resolves them at the proxy level. I should be clear: this is not a substitute for fixing the ORM code properly — eager loading is still the right long-term approach. But it eliminates the performance impact immediately while you address the underlying code, which is rather the point of having staff who can anticipate what needs doing.