
Prisma's relationLoadStrategy: When LATERAL JOINs Backfire on PostgreSQL

Two strategies. One correct answer per situation. Permit me to save you the 15-second page load.

The Waiter of Gold Lapel · Published Mar 5, 2026 · Updated Mar 20, 2026 · 24 min read
The illustrator attempted a LATERAL JOIN and has been nested in loops ever since.

Good evening. You have upgraded Prisma and something has gone rather wrong.

Perhaps your API endpoint that used to respond in 200ms is now taking 15 seconds. Perhaps a dashboard page that loaded crisply now hangs until the browser gives up. Perhaps you have been reading GitHub Discussion #22288 with a growing sense of recognition, scrolling through reports from developers whose production applications have begun behaving as though they are running on hardware from a previous century.

The culprit, in all likelihood, is relationLoadStrategy — a feature introduced in Prisma 5.8 that changed how Prisma loads related data from PostgreSQL. The feature offers two strategies: "join" and "query". The "join" strategy uses LATERAL JOINs to fetch everything in a single SQL statement. The "query" strategy uses multiple separate queries with IN clauses and merges the results in Prisma's Rust-based query engine.

On paper, one query should always beat five queries. In practice, that one query can be catastrophically worse — and the conditions that trigger the regression are precisely the ones that appear in real applications: nested relations, pagination, and result sets larger than a handful of rows.

I should note that there is nothing inherently wrong with LATERAL JOINs. They are a powerful, well-specified SQL feature that I hold in genuine esteem. The problem is not the tool. The problem is reaching for it in situations where it does not belong — the way one might use a crystal decanter to hammer a nail. Technically capable. Structurally inadvisable.

Allow me to walk through what is happening, why it happens, and how to choose the right strategy for each situation. We shall examine the SQL that Prisma generates, read the query plans that PostgreSQL produces, and arrive at a set of rules that will prevent the sort of page-load times that prompt emergency Slack messages.

A brief word on LATERAL JOINs, for those who have not had the pleasure

Before we examine what Prisma does with LATERAL JOINs, it is worth understanding what they are and why they exist. A LATERAL subquery in the FROM clause is permitted to reference columns from preceding FROM items — something that ordinary subqueries in the FROM clause cannot do.

LATERAL JOIN fundamentals
-- A LATERAL subquery can reference columns from preceding FROM items
-- This is a standard SQL feature, not a Prisma invention

-- Without LATERAL: the subquery is independent
SELECT u.*, (SELECT count(*) FROM orders WHERE user_id = u.id)
FROM users u;

-- With LATERAL: the subquery references the outer table directly
SELECT u.*, o.*
FROM users u
LEFT JOIN LATERAL (
  SELECT *
  FROM orders
  WHERE orders.user_id = u.id   -- references u from the outer query
  ORDER BY created_at DESC
  LIMIT 5                        -- can apply per-row LIMIT
) AS o ON true;

-- LATERAL is powerful because each subquery execution
-- can see the current row from the outer table.
-- But "can see the current row" means "must execute per row."
-- That is both the feature and the cost.

This is genuinely useful. LATERAL allows you to express "for each row in the outer table, run this subquery" in a way that the planner can optimize. Top-N-per-group queries, for example, become elegant with LATERAL: give me the 5 most recent orders for each user, using an index scan that stops after 5 rows per user rather than fetching all orders and filtering.

The feature is not the problem. The application of the feature — four nested LATERAL subqueries on one-to-many relations across thousands of parent rows — is the problem. LATERAL was designed for correlated subqueries where the correlation is the point. Prisma uses it as a general-purpose relation loading mechanism, which is rather like using a tailor's scissors to cut the hedges. They are excellent scissors. The hedges are not their domain.

What does each strategy actually do?

To understand why the choice matters, we need to see what PostgreSQL receives in each case. I shall present the SQL, because the SQL is where performance lives. Not in abstractions. Not in documentation. In the actual statements that arrive at PostgreSQL's parser.

Consider a typical e-commerce schema:

schema.prisma
// schema.prisma — a typical e-commerce data model
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  orders    Order[]
  profile   Profile?
}

model Order {
  id        Int         @id @default(autoincrement())
  total     Decimal
  status    String
  userId    Int
  user      User        @relation(fields: [userId], references: [id])
  items     OrderItem[]
  createdAt DateTime    @default(now())
}

model OrderItem {
  id        Int     @id @default(autoincrement())
  quantity  Int
  price     Decimal
  productId Int
  orderId   Int
  order     Order   @relation(fields: [orderId], references: [id])
  product   Product @relation(fields: [productId], references: [id])
}

model Product {
  id    Int         @id @default(autoincrement())
  name  String
  items OrderItem[]
}

model Profile {
  id     Int    @id @default(autoincrement())
  bio    String
  userId Int    @unique
  user   User   @relation(fields: [userId], references: [id])
}

Five tables, four relations, three levels of nesting. This is not an exotic schema. I have seen this exact structure — or something very close to it — in every e-commerce application, every order management system, and roughly half of all SaaS products I have had occasion to examine.

Now, a Prisma query that loads users with their orders, order items, products, and profiles — a common pattern for an admin dashboard or API endpoint:

// Prisma 5.8+ — choosing a relation load strategy
const users = await prisma.user.findMany({
  relationLoadStrategy: "join",   // or "query"
  include: {
    orders: {
      include: {
        items: {
          include: {
            product: true
          }
        }
      }
    },
    profile: true
  }
});

The relationLoadStrategy flag changes nothing about what data you receive. The JavaScript objects that Prisma returns are identical regardless of strategy. What changes is how that data is fetched from PostgreSQL — and that difference can span four orders of magnitude.

The "join" strategy

With relationLoadStrategy: "join", Prisma generates a single query using PostgreSQL's LATERAL JOIN syntax. Each relation becomes a LATERAL subquery that references the parent table's primary key:

Single LATERAL JOIN query
-- What "join" generates: a single query with LATERAL JOINs
SELECT
  "t1"."id", "t1"."email",
  "orders"."id" AS "orders.id", "orders"."total", "orders"."status",
  "items"."id" AS "items.id", "items"."quantity", "items"."price",
  "product"."id" AS "product.id", "product"."name",
  "profile"."id" AS "profile.id", "profile"."bio"
FROM "User" AS "t1"
LEFT JOIN LATERAL (
  SELECT "t2".*
  FROM "Order" AS "t2"
  WHERE "t2"."userId" = "t1"."id"
) AS "orders" ON true
LEFT JOIN LATERAL (
  SELECT "t3".*
  FROM "OrderItem" AS "t3"
  WHERE "t3"."orderId" = "orders"."id"
) AS "items" ON true
LEFT JOIN LATERAL (
  SELECT "t4".*
  FROM "Product" AS "t4"
  WHERE "t4"."id" = "items"."productId"
) AS "product" ON true
LEFT JOIN LATERAL (
  SELECT "t5".*
  FROM "Profile" AS "t5"
  WHERE "t5"."userId" = "t1"."id"
) AS "profile" ON true

This is one SQL statement. One network round trip. One result set. Prisma's query engine receives a flat table of rows and reconstructs the nested object hierarchy by de-duplicating parent columns. Elegant, in principle.

I wish to draw your attention to the ON true clauses. A LATERAL JOIN's correlation is expressed inside the subquery's WHERE clause, not in the ON condition, so ON true simply means "include every row the subquery returns." The syntax is correct and, for a LEFT JOIN, required — but it also means PostgreSQL cannot use the ON condition to filter rows. The filtering happens inside each LATERAL subquery, per outer row. Every. Single. Time.

The "query" strategy

With relationLoadStrategy: "query", Prisma sends separate queries for each relation level and merges the results in its Rust-based query engine:

Multiple queries with IN clauses
-- What "query" generates: separate queries, merged in Prisma's engine
-- Query 1:
SELECT "id", "email" FROM "User";

-- Query 2:
SELECT "id", "total", "status", "userId"
FROM "Order"
WHERE "userId" IN (1, 2, 3, 4, 5, ...);

-- Query 3:
SELECT "id", "quantity", "price", "orderId", "productId"
FROM "OrderItem"
WHERE "orderId" IN (101, 102, 103, ...);

-- Query 4:
SELECT "id", "name"
FROM "Product"
WHERE "id" IN (501, 502, 503, ...);

-- Query 5:
SELECT "id", "bio", "userId"
FROM "Profile"
WHERE "userId" IN (1, 2, 3, 4, 5, ...);

Five queries. Five round trips. More network overhead. But each query is simple, predictable, and independently optimizable by PostgreSQL's planner. There are no correlated subqueries. No nested loops. Each query scans an index once and returns all matching rows.

This is the approach Prisma used exclusively before v5.8. It is, architecturally speaking, a controlled variant of the N+1 pattern — except instead of N individual queries, it batches each relation level into a single IN query. Call it an N+1 with good manners. The batching is the crucial difference: instead of 500 queries for 500 users' orders, you get 1 query with 500 IDs in an IN clause. PostgreSQL handles IN clauses with bitmap scans, which are among its most efficient access patterns.
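
If you will permit a sketch of that merge step: the grouping Prisma's engine performs can be mimicked in a few lines. The types and data below are illustrative stand-ins of my own, not Prisma internals.

```typescript
// Sketch of the "query" strategy's merge step: group child rows by
// foreign key, then attach them to parents in a single linear pass.
// Types and data are illustrative, not Prisma internals.
type User = { id: number; email: string };
type Order = { id: number; userId: number; total: number };

function attachOrders(users: User[], orders: Order[]) {
  const byUser = new Map<number, Order[]>();
  for (const o of orders) {
    const list = byUser.get(o.userId) ?? [];
    list.push(o);
    byUser.set(o.userId, list);
  }
  // One pass over children, one over parents — no cartesian rows,
  // no duplicated parent columns to strip back out.
  return users.map((u) => ({ ...u, orders: byUser.get(u.id) ?? [] }));
}

const merged = attachOrders(
  [{ id: 1, email: "a@example.com" }, { id: 2, email: "b@example.com" }],
  [{ id: 101, userId: 1, total: 40 }, { id: 102, userId: 1, total: 15 }],
);
console.log(merged[0].orders.length); // 2
console.log(merged[1].orders.length); // 0
```

The cost is linear in the total number of rows, which is why the engine-side merge never becomes the bottleneck the way the cartesian result set does.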

The anatomy of a cartesian expansion

Before we proceed to the benchmark results, I should like to make the row duplication problem visceral. It is one thing to say "parent rows are duplicated." It is another to see what that means for a single user.

Row duplication in the LATERAL JOIN result set
-- What the LATERAL JOIN result set looks like for ONE user
-- User #42 has 2 orders, each with 3 items

-- Row 1: user_42 | order_101 | item_1001 | product_A | profile_42
-- Row 2: user_42 | order_101 | item_1002 | product_B | profile_42
-- Row 3: user_42 | order_101 | item_1003 | product_C | profile_42
-- Row 4: user_42 | order_102 | item_1004 | product_A | profile_42
-- Row 5: user_42 | order_102 | item_1005 | product_D | profile_42
-- Row 6: user_42 | order_102 | item_1006 | product_B | profile_42

-- user_42's email appears 6 times
-- profile_42's bio appears 6 times
-- order_101's total appears 3 times
-- order_102's total appears 3 times
--
-- Prisma's engine must then de-duplicate all of this
-- back into a nested object structure.
--
-- With "query", each of these values appears exactly once.

User #42's email — perhaps a 40-character string — is transmitted 6 times instead of once. Their profile bio — perhaps 500 characters — is transmitted 6 times. Multiply this by 500 users, each with their own orders and items, and the waste becomes structural rather than incidental.

Data transfer comparison
-- Cartesian expansion math for the LATERAL JOIN result set
--
-- Given: 500 users, avg 4 orders each, avg 4 items per order
--
-- Without duplication ("query" strategy):
--   Users:       500 rows ×  56 bytes =   28,000 bytes
--   Orders:    2,000 rows ×  64 bytes =  128,000 bytes
--   Items:     8,000 rows ×  48 bytes =  384,000 bytes
--   Products:    342 rows ×  32 bytes =   10,944 bytes
--   Profiles:    500 rows ×  40 bytes =   20,000 bytes
--   Total:    11,342 rows             =  570,944 bytes (~557 KB)
--
-- With LATERAL JOIN duplication ("join" strategy):
--   Each leaf row carries ALL parent columns:
--   8,247 rows × (56 + 64 + 48 + 32 + 40) bytes
--   = 8,247 × 240 bytes
--   = 1,979,280 bytes (~1.9 MB)
--
-- The "join" strategy transfers 3.5x more data over the wire
-- for the same logical result set.

The "join" strategy transfers 3.5x more raw data over the wire for the same logical result. And this ratio worsens with wider tables. If your User model includes a bio field, a settings JSON column, or an avatar_url, every byte of those fields is duplicated for every order item. The duplication scales with the product of parent column width and child row count.

Prisma's query engine must then de-duplicate this expanded result set back into a nested object structure. This is not free. The engine allocates memory for the full cartesian result, iterates through it, and reconstructs the hierarchy. For 500 users with 8,000 order items, that means processing 8,247 rows to produce what is logically an 11,342-row result. The engine works harder and uses more memory to deliver the same objects.

When does the LATERAL JOIN strategy go wrong?

The LATERAL JOIN approach has three failure modes, all of which compound in real applications. I have listed them in ascending order of severity, because if I led with the worst one, you might not believe me until you had seen the others first.

Failure mode 1: Cartesian row expansion

A LATERAL JOIN produces a row for every combination of parent and child matches. If a user has 4 orders, each order has 4 items, and each item has 1 product, a single user produces 16 rows in the result set. The user's email is repeated 16 times. The order's total is repeated 4 times.

With 500 users averaging 4 orders of 4 items each, the result set contains 8,000+ rows — most of which are duplicated parent data. The "query" strategy returns the same information in roughly 500 + 2,000 + 8,000 + 342 + 500 = 11,342 rows with zero duplication. The data transferred is often smaller despite more round trips.

This is not merely an aesthetic concern. The duplicated data must be transmitted over the wire, parsed by Prisma's engine, and de-duplicated back into objects. For large result sets, this overhead is measurable — and it grows multiplicatively with nesting depth, as the product of the fan-outs at each level, not linearly.
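
The row arithmetic is worth verifying by hand. A small sketch using the benchmark averages quoted above (the figure of 342 distinct products is taken from the example earlier in this article):

```typescript
// Row counts for the benchmark shape: 500 users, avg 4 orders each,
// avg 4 items per order. Figures mirror the article's example.
const users = 500;
const ordersPerUser = 4;
const itemsPerOrder = 4;

// "join": one result row per leaf order item, parents repeated onto each
const joinRows = users * ordersPerUser * itemsPerOrder;

// "query": every entity row appears exactly once across the five queries
const queryRows =
  users +                                 // users
  users * ordersPerUser +                 // orders
  users * ordersPerUser * itemsPerOrder + // order items
  342 +                                   // distinct products (article's figure)
  users;                                  // profiles

console.log(joinRows);  // 8000 result rows, each carrying all parent columns
console.log(queryRows); // 11342 rows, zero duplication
```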

Failure mode 2: Nested loop multiplication

PostgreSQL's planner typically executes LATERAL subqueries using nested loops. Each nesting level multiplies the loop count. With the schema above:

  • 500 users — scan once
  • 500 profile lookups — 1 per user (loops=500)
  • 500 order lookups — 1 per user (loops=500), returning ~2,000 rows
  • 2,000 order-item lookups — 1 per order (loops=2,034)
  • 8,000 product lookups — 1 per order item (loops=8,247)

That is 11,281 index lookups in a single query. Each lookup is fast individually — a fraction of a millisecond — but they accumulate. At 0.02ms per lookup, 11,281 lookups consume 225ms just in index probe time, before accounting for any I/O wait, buffer cache misses, or lock contention. And 0.02ms per lookup is the optimistic case — when the relevant index pages are in shared_buffers. If they are not, each lookup may require a disk read.
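
As a sanity check on that arithmetic — a sketch using the loop counts above, with 0.02ms per probe as the optimistic, fully cached assumption:

```typescript
// Index probe counts from the loop breakdown above, and the optimistic
// per-probe cost assumed in the text (everything in shared_buffers).
const loops = {
  profiles: 500,  // 1 lookup per user
  orders: 500,    // 1 lookup per user
  items: 2034,    // 1 lookup per order
  products: 8247, // 1 lookup per order item
};

const totalLookups = Object.values(loops).reduce((sum, n) => sum + n, 0);
const probeTimeMs = totalLookups * 0.02;

console.log(totalLookups);           // 11281
console.log(probeTimeMs.toFixed(1)); // "225.6" — ~225 ms of pure probe time
```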

EXPLAIN ANALYZE — join strategy, 15.8 seconds
-- EXPLAIN ANALYZE on the LATERAL JOIN query
-- 500 users, ~2,000 orders, ~8,000 order items
Nested Loop Left Join  (cost=0.43..847291.20 rows=500 width=312)
  (actual time=1.24..15842.31 rows=8247 loops=1)
  ->  Nested Loop Left Join  (cost=0.43..421890.40 rows=500 width=248)
        (actual time=0.98..9214.55 rows=8247 loops=1)
        ->  Nested Loop Left Join  (cost=0.43..210201.80 rows=500 width=184)
              (actual time=0.71..4102.88 rows=2034 loops=1)
              ->  Nested Loop Left Join  (cost=0.29..1042.50 rows=500 width=120)
                    (actual time=0.08..12.41 rows=512 loops=1)
                    ->  Seq Scan on "User" as t1
                          (actual time=0.03..1.82 rows=500 loops=1)
                    ->  Index Scan using "Profile_userId_key" on "Profile" as t5
                          (actual time=0.01..0.01 rows=1 loops=500)
                          Index Cond: ("userId" = t1."id")
              ->  Index Scan using "Order_userId_idx" on "Order" as t2
                    (actual time=0.02..7.94 rows=4 loops=500)
                    Index Cond: ("userId" = t1."id")
        ->  Index Scan using "OrderItem_orderId_idx" on "OrderItem" as t3
              (actual time=0.01..2.48 rows=4 loops=2034)
              Index Cond: ("orderId" = orders."id")
  ->  Index Scan using "Product_pkey" on "Product" as t4
        (actual time=0.01..0.01 rows=1 loops=8247)
        Index Cond: ("id" = items."productId")
Planning Time: 2.14 ms
Execution Time: 15847.82 ms

15.8 seconds. For 500 users. Permit me to let that settle.

The loops=8247 on the Product index scan is the critical detail. PostgreSQL reports that it executed that index scan 8,247 times, each time finding 1 row. The actual time=0.01..0.01 is per-loop — 0.01ms times 8,247 loops is 82ms just for the product lookups. But the cost compounds upward through the join tree. Each nested loop's total time includes the time spent in all inner loops.

Now compare the "query" strategy on the same data:

EXPLAIN ANALYZE — query strategy, 10.1ms total
-- EXPLAIN ANALYZE on the separate "query" strategy queries
-- Query 1: Users (0.8ms)
Seq Scan on "User"  (cost=0.00..12.00 rows=500 width=56)
  (actual time=0.02..0.81 rows=500 loops=1)

-- Query 2: Orders with IN clause (3.2ms)
Bitmap Heap Scan on "Order"  (cost=22.45..1842.30 rows=2034 width=64)
  (actual time=0.42..3.18 rows=2034 loops=1)
  Recheck Cond: ("userId" = ANY ('{1,2,3,...}'::integer[]))
  ->  Bitmap Index Scan on "Order_userId_idx"
        (actual time=0.31..0.31 rows=2034 loops=1)

-- Query 3: Order items with IN clause (4.1ms)
Bitmap Heap Scan on "OrderItem"  (cost=45.20..3201.80 rows=8247 width=48)
  (actual time=0.68..4.08 rows=8247 loops=1)
  Recheck Cond: ("orderId" = ANY ('{101,102,...}'::integer[]))
  ->  Bitmap Index Scan on "OrderItem_orderId_idx"
        (actual time=0.52..0.52 rows=8247 loops=1)

-- Query 4: Products with IN clause (1.4ms)
Index Scan using "Product_pkey" on "Product"
  (actual time=0.02..1.38 rows=342 loops=1)
  Index Cond: ("id" = ANY ('{501,502,...}'::integer[]))

-- Query 5: Profiles with IN clause (0.6ms)
Index Scan using "Profile_userId_key" on "Profile"
  (actual time=0.01..0.58 rows=500 loops=1)
  Index Cond: ("userId" = ANY ('{1,2,3,...}'::integer[]))

-- Total: ~10.1ms across 5 queries

10.1 milliseconds across five queries. The same data. The same indexes. A 1,500x difference.

The separate queries use Bitmap Index Scans and IN clauses, which PostgreSQL handles extremely efficiently. No nested loops. No repeated index probes. Each query scans its index once and returns all matching rows in a single pass. The bitmap scan is particularly elegant here: it builds a bitmap of matching heap page addresses, sorts them in physical order, and reads them sequentially. No random I/O. No per-row index probing. One sweep through the index, one sweep through the heap.
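
The page-ordering idea can be illustrated with plain numbers. This is a conceptual sketch with invented page addresses, not PostgreSQL internals:

```typescript
// Why a bitmap heap scan is cache-friendly: matching heap pages are
// collected, de-duplicated, sorted into physical order, then visited
// once each — versus a nested loop probing pages in arrival order.
const matches = [412, 7, 412, 88, 7, 203, 88, 412]; // heap page per matching index entry

// Bitmap scan: unique pages in physical order → near-sequential reads
const bitmapOrder = [...new Set(matches)].sort((a, b) => a - b);
console.log(bitmapOrder); // [ 7, 88, 203, 412 ] — each page read once

// Nested loop: pages visited in probe order, repeats included
console.log(matches.length); // 8 probes, 8 page visits for 4 distinct pages
```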

Failure mode 3: skip/take with LATERAL is devastating

This is the combination most frequently reported in the GitHub discussions, and it deserves special attention because it affects the most common web application pattern: paginated lists.

// The combination that triggers the worst regressions
const orders = await prisma.order.findMany({
  relationLoadStrategy: "join",
  skip: 200,
  take: 50,
  include: {
    items: {
      include: {
        product: true
      }
    },
    user: {
      include: {
        profile: true
      }
    }
  },
  orderBy: { createdAt: "desc" }
});

// Prisma generates a LATERAL JOIN query with OFFSET 200 LIMIT 50
// applied to the OUTER query — but the LATERAL subqueries
// execute for EVERY candidate row the outer scan considers,
// not just the 50 that survive the LIMIT.

When Prisma applies skip and take to a "join" query, the OFFSET and LIMIT are applied to the outer query — after all the LATERAL subqueries have executed. PostgreSQL must evaluate the LATERAL joins for every candidate row, including the 200 rows that will be discarded by OFFSET.

EXPLAIN ANALYZE — skip/take with join strategy
-- The planner evaluates LATERAL subqueries for rows
-- that will be discarded by OFFSET
Limit  (cost=42891.20..47102.40 rows=50 width=312)
  (actual time=18421.33..18892.71 rows=50 loops=1)
  ->  Nested Loop Left Join  (cost=0.43..2410892.00 rows=28500 width=312)
        (actual time=1.24..18891.44 rows=250 loops=1)
        -- PostgreSQL processes 250 rows (skip 200 + take 50)
        -- through ALL lateral joins before discarding the first 200
        -- With 4 levels of nesting, that is at least 250 x 4 index lookups
        -- ... plus every intermediate row those laterals produce
Planning Time: 3.82 ms
Execution Time: 18894.20 ms

18.9 seconds to return 50 rows. The planner cannot push the LIMIT down into the LATERAL subqueries because it needs to process and join all intermediate results before it can determine which 50 rows to return. The LATERAL correlation prevents the planner from knowing how many outer rows it needs to process to produce 50 final rows — the join multiplicity makes this unpredictable.

The "query" strategy handles this gracefully because the architecture naturally separates pagination from relation loading:

EXPLAIN ANALYZE — skip/take with query strategy
-- What "query" generates for the same skip/take request

-- Query 1: Orders with OFFSET/LIMIT applied directly (1.2ms)
SELECT "id", "total", "status", "userId", "createdAt"
FROM "Order"
ORDER BY "createdAt" DESC
OFFSET 200 LIMIT 50;
-- Returns exactly 50 rows. Done. Move on.

-- Query 2: Items for ONLY those 50 orders (2.1ms)
SELECT "id", "quantity", "price", "orderId", "productId"
FROM "OrderItem"
WHERE "orderId" IN (4801, 4802, 4803, ..., 4850);
-- 50 IDs in the IN clause. Not 28,500.

-- Query 3: Products for those items (0.9ms)
SELECT "id", "name"
FROM "Product"
WHERE "id" IN (501, 502, 511, ...);

-- Query 4: Users for those 50 orders (0.4ms)
SELECT "id", "email"
FROM "User"
WHERE "id" IN (12, 45, 78, ...);

-- Query 5: Profiles for those users (0.3ms)
SELECT "id", "bio", "userId"
FROM "Profile"
WHERE "userId" IN (12, 45, 78, ...);

-- Total: ~4.9ms across 5 queries
-- Compare with 18,894ms for "join". Factor of 3,856x.

4.9 milliseconds versus 18,894 milliseconds. The same data. The same page. A factor of 3,856x. This is not a marginal improvement. This is the difference between a responsive application and one that times out before the user sees anything.

The structural reason is clear: the "query" strategy applies OFFSET and LIMIT to the first query alone, then loads relations only for the rows that survived pagination. 50 parent rows means 50 IDs in the IN clause — not 250 candidate rows pushed through four levels of LATERAL subqueries.
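
The paginate-then-load shape is easy to demonstrate with arrays standing in for tables. A sketch — the types and function name are my own, not Prisma's:

```typescript
// Sketch of the "query" strategy's pagination shape: OFFSET/LIMIT on
// the parent set first, then children fetched only for the surviving
// IDs. Arrays stand in for tables; names are illustrative.
type Order = { id: number; createdAt: number };
type Item = { id: number; orderId: number };

function pageWithItems(orders: Order[], items: Item[], skip: number, take: number) {
  // Step 1: pagination touches parents only — exactly `take` rows survive
  const page = [...orders]
    .sort((a, b) => b.createdAt - a.createdAt)
    .slice(skip, skip + take);

  // Step 2: children loaded with an IN over just those parent IDs
  const ids = new Set(page.map((o) => o.id));
  const pageItems = items.filter((i) => ids.has(i.orderId));
  return { page, pageItems };
}

const orders = Array.from({ length: 10 }, (_, i) => ({ id: i + 1, createdAt: i }));
const items = orders.flatMap((o) => [
  { id: o.id * 100, orderId: o.id },
  { id: o.id * 100 + 1, orderId: o.id },
]);

const { page, pageItems } = pageWithItems(orders, items, 2, 3);
console.log(page.map((o) => o.id)); // [ 8, 7, 6 ] — newest first, skip 2, take 3
console.log(pageItems.length);      // 6 — only children of the 3 surviving orders
```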

What the buffer cache tells us

If you'll permit a brief digression into PostgreSQL internals — and I do recommend permitting it, as this is where the performance difference becomes physically tangible.

EXPLAIN ANALYZE with BUFFERS
-- Adding BUFFERS to EXPLAIN ANALYZE reveals I/O cost
EXPLAIN (ANALYZE, BUFFERS) SELECT ...  -- paste the LATERAL JOIN query

-- What to look for:
Nested Loop Left Join
  Buffers: shared hit=42891 read=3201
  -- "shared hit" = pages found in PostgreSQL's buffer cache
  -- "read" = pages fetched from disk
  -- High "read" values mean the working set exceeds shared_buffers
  -- LATERAL JOINs thrash the buffer cache when they probe
  -- thousands of index entries across many different pages

-- Compare with the "query" strategy:
Bitmap Heap Scan on "OrderItem"
  Buffers: shared hit=412
  -- Bitmap scans are cache-friendly: they sort page addresses
  -- and read them in physical order. No random I/O.
  -- 412 buffer hits vs 42,891. That is not a rounding error.

The LATERAL JOIN strategy performs 42,891 buffer hits and 3,201 buffer reads (disk I/O) for a single query. The bitmap scan in the "query" strategy performs 412 buffer hits for the same logical data. That is a 100x difference in buffer cache pressure.

This matters for two reasons. First, those 42,891 buffer accesses contend with every other query running on the same PostgreSQL instance. Each buffer access acquires a lightweight lock on the buffer descriptor. Under high concurrency, this contention becomes visible. Second, the 3,201 disk reads mean the LATERAL query's working set exceeds what shared_buffers can hold — and those disk reads are random, not sequential, because each nested loop index probe jumps to a different page.

The bitmap scan, by contrast, reads pages in physical order. Sequential reads are 10-100x faster than random reads on spinning disks and 2-5x faster on SSDs. The query strategy is not merely doing less work — it is doing fundamentally different work that is better suited to how storage hardware operates.

How do the two strategies compare?

A summary, because a table is worth a thousand qualifications:

| Dimension | "join" | "query" | Edge |
| --- | --- | --- | --- |
| Network round trips | 1 | 1 per relation level (typically 3-6) | join |
| SQL complexity | Single complex query | Multiple simple queries | query |
| Result set duplication | Parent rows duplicated per child | No duplication | query |
| skip/take behavior | LATERAL runs for skipped rows | Pagination on parent only | query |
| Deep nesting (3+ levels) | Nested loops compound | Linear cost per level | query |
| Shallow relations (1 level) | Single efficient join | 2 queries, more overhead | join |
| Large parent result sets | Massive cartesian expansion | Bounded by IN clause size | query |
| Small parent result sets (<100) | Minimal overhead | Query overhead dominates | join |
| PostgreSQL planner accuracy | Sensitive to row estimates | Simple plans, fewer mistakes | query |
| 1:1 relations only | No duplication, single query | Unnecessary extra queries | join |
| Wire protocol overhead | One parse/bind/execute cycle | Multiple parse/bind/execute cycles | join |
| Connection pool pressure | Holds connection for 1 query | Holds connection for N queries | join |

The pattern is consistent: "join" wins when the query is shallow, the result set is small, and the relations are 1:1. "query" wins in nearly every other scenario — and its failure mode (a few extra milliseconds of network overhead) is dramatically less severe than "join"'s failure mode (15+ second regressions).

I have added three rows to this comparison that deserve commentary. The "join" strategy does hold genuine advantages in wire protocol overhead and connection pool pressure. A single query requires one parse-bind-execute cycle, while five queries require five. If your application connects to PostgreSQL through a high-latency network — say, across regions or through a VPN — those extra round trips accumulate. And holding a connection for five sequential queries keeps it checked out of the pool longer than a single query would.

These are real advantages. They are also, in every benchmark I have conducted, overwhelmed by the nested loop multiplication and cartesian expansion costs when those pathologies are present. The round-trip savings of "join" are measured in single-digit milliseconds. The nested loop penalty is measured in seconds. The arithmetic is not close.

An honest counterpoint: when "join" genuinely wins

A waiter who overstates his case is no waiter at all. I have spent several sections explaining why the "join" strategy can be catastrophic, and I stand by every word. But I should be equally clear about where it is the correct choice.

When you call findUnique or findFirst — loading a single parent record with its relations — the "join" strategy is almost always better. There is no cartesian explosion because there is one parent row. The LATERAL subqueries execute once each, not 500 times. And you save 4 network round trips, which matters when the total query time is measured in milliseconds.

Similarly, when all relations are 1:1 — a user with a profile, an order with a shipping address — the LATERAL JOIN produces no row duplication. Each join adds exactly one row's worth of columns. The result set has the same row count as the parent query. In this scenario, "join" is strictly superior: same data, fewer round trips, one parse-bind-execute cycle.

The danger zone is specifically the combination of findMany + one-to-many relations + nesting. That combination produces the multiplicative loop counts and cartesian expansion that transform a millisecond query into a multi-second ordeal. If your query avoids that combination, "join" is fine. Good, even.

I should also note that network latency varies enormously across deployment topologies. If your application server is in us-east-1 and your PostgreSQL instance is in eu-west-1 — a configuration I have seen more often than I would like — each round trip costs 80-120ms. Five queries at 120ms per round trip is 600ms of pure network overhead, before any query execution. In that scenario, the "join" strategy's single round trip has significant value, even if the query itself takes 200ms longer to execute. Geography, it turns out, matters.

But I should immediately observe that if your application server and database are on different continents, you have a problem that no relation load strategy can solve. The correct response is to move them closer together, not to choose query strategies based on intercontinental latency.

When should you use each strategy?

I shall be direct. These are not tentative suggestions.

Use "join" when:

  • You are loading a single record with its relations (findUnique, findFirst)
  • Relations are shallow — one level of include, not nested
  • Relations are 1:1 — no row duplication
  • No skip/take on the parent query
  • The total expected row count across all relations is under 100
Good use of join strategy
// "join" is appropriate here — shallow, small result set
const user = await prisma.user.findUnique({
  relationLoadStrategy: "join",
  where: { id: userId },
  include: {
    profile: true,          // 1:1 — no duplication
    orders: {
      take: 5,              // bounded, small
      orderBy: { createdAt: "desc" }
    }
  }
});

Use "query" when:

  • You are loading many records with relations (findMany)
  • Relations are nested — 2 or more levels of include
  • Relations are 1:many — each parent produces multiple children
  • You are using skip/take for pagination
  • The parent result set is large (100+ rows)
  • You are uncertain which strategy is appropriate
Good use of query strategy
// "query" is appropriate here — deep nesting, pagination
const orders = await prisma.order.findMany({
  relationLoadStrategy: "query",
  skip: page * pageSize,
  take: pageSize,
  include: {
    items: {
      include: {
        product: {
          include: {
            category: true   // 3 levels deep
          }
        }
      }
    },
    user: {
      include: {
        profile: true
      }
    }
  },
  orderBy: { createdAt: "desc" }
});

If you are uncertain, default to "query". Its worst case is a few extra milliseconds of network latency. The "join" strategy's worst case is a query that takes longer than your HTTP timeout. In performance engineering, asymmetric risk profiles like this have only one correct answer: choose the strategy whose failure mode you can tolerate.

"The ORM did not fail. It did exactly what was asked. It was simply asked poorly."

— from You Don't Need Redis, Chapter 3: The ORM Tax

Automating the decision with middleware

If you would rather not annotate every Prisma query individually — and I would not blame you, as that approach scales poorly and relies on every developer remembering the rules — you can encode the decision in Prisma middleware.

Prisma middleware for automatic strategy selection
// A Prisma middleware to automatically set "query" for risky patterns
import { Prisma } from "@prisma/client";

// Count the maximum depth of nested includes
function includeDepth(include) {
  if (!include || typeof include !== "object") return 0;
  let max = 0;
  for (const val of Object.values(include)) {
    if (val && typeof val === "object" && val.include) {
      // A nested include: one level deeper than the current one
      max = Math.max(max, 1 + includeDepth(val.include));
    } else if (val) {
      // `relation: true` (or a select/filter object) counts as one level
      max = Math.max(max, 1);
    }
  }
  return max;
}

// Apply as Prisma middleware
prisma.$use(async (params, next) => {
  // Only applies to findMany with includes
  if (params.action === "findMany" && params.args?.include) {
    const depth = includeDepth(params.args.include);
    const hasPagination = params.args.skip != null || params.args.take != null;

    // Auto-switch to "query" when risk factors are present
    if (depth >= 2 || hasPagination) {
      params.args.relationLoadStrategy = "query";
    }
  }
  return next(params);
});

This middleware inspects the include structure before the query executes and switches to "query" when it detects risk factors: nesting depth of 2 or more, or the presence of pagination. Queries that don't match these patterns use whatever default Prisma assigns — which, for shallow findUnique calls, is likely "join", and that is appropriate.

I should note that this approach has a limitation: it examines the static structure of the query, not the runtime cardinality. A findMany with one level of include and no pagination will not trigger the middleware — but if that query returns 10,000 parent rows with 50,000 child rows, the "join" strategy will still produce a cartesian explosion. The middleware prevents the most common pathologies. It does not prevent all of them. For complete protection, you need to understand the data volumes your queries touch.
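If your Prisma version steers you away from $use toward client extensions, the same logic ports over. A sketch using $extends, with the heuristic factored into a pure function so it can be tested without a database connection (the thresholds mirror the middleware above; the wiring comments are illustrative, not prescriptive):

```javascript
// Pure decision helper, using the same heuristics as the middleware above.
function includeDepth(include) {
  if (!include || typeof include !== "object") return 0;
  let max = 0;
  for (const val of Object.values(include)) {
    if (val && typeof val === "object" && val.include) {
      max = Math.max(max, 1 + includeDepth(val.include));
    } else if (val) {
      max = Math.max(max, 1);
    }
  }
  return max;
}

// Returns the strategy to apply, or undefined to leave Prisma's default.
function chooseStrategy(args = {}) {
  if (args.relationLoadStrategy) return args.relationLoadStrategy; // explicit wins
  const risky =
    includeDepth(args.include) >= 2 ||
    args.skip != null ||
    args.take != null;
  return risky ? "query" : undefined;
}

// Wiring (requires @prisma/client):
// const prisma = new PrismaClient().$extends({
//   query: {
//     $allModels: {
//       async findMany({ args, query }) {
//         const strategy = chooseStrategy(args);
//         if (strategy) args.relationLoadStrategy = strategy;
//         return query(args);
//       },
//     },
//   },
// });
```

Keeping the decision in a pure function means the heuristic itself can be unit-tested, which is rather harder to do when the logic lives inline in the middleware.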

Beyond strategy selection: reducing the problem surface

Choosing between "join" and "query" is the most impactful decision, but it is not the only lever available. Two complementary techniques deserve mention.

Use select instead of include

Prisma's include fetches all columns on each related model. If your Product model has 15 columns but your UI only needs name and price, you are transferring 13 unnecessary columns per product — and with the "join" strategy, those unnecessary columns are duplicated across every parent row in the cartesian result.

Using select to reduce column count
// Reduce the problem: select only the fields you need
const orders = await prisma.order.findMany({
  relationLoadStrategy: "query",
  select: {
    id: true,
    total: true,
    status: true,
    // Don't select createdAt, updatedAt, notes, metadata, etc.
    items: {
      select: {
        quantity: true,
        price: true,
        product: {
          select: {
            name: true
            // Don't select description, sku, weight, dimensions, etc.
          }
        }
      }
    },
    user: {
      select: {
        email: true
        // Don't select passwordHash, lastLoginAt, settings, etc.
      }
    }
  }
});

// "select" reduces column count in every query.
// With "join", this shrinks the per-row size in the cartesian product.
// With "query", this reduces data transfer per query.
// Either way, it helps. Use it.

With select, you control exactly which columns appear in each query. This reduces data transfer for both strategies, but it disproportionately benefits "join" because the cartesian duplication amplifies every extra byte. A 500-character bio field that appears in 8,247 duplicated rows contributes 4MB of waste. Drop it from the select, and 4MB disappears.
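That arithmetic is worth making concrete. A quick sketch using the figures from this article:

```javascript
// Back-of-envelope waste from one duplicated column in a joined result set.
const bytesPerValue = 500;   // e.g. a 500-character bio field
const duplicatedRows = 8247; // rows in the cartesian result
const wastedBytes = bytesPerValue * duplicatedRows;
console.log((wastedBytes / (1024 * 1024)).toFixed(1) + " MB"); // "3.9 MB"
```

Dropping that one field from the select removes every one of those bytes from every duplicated row.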

Use cursor-based pagination instead of skip/take

Even with the "query" strategy, offset-based pagination degrades on deep pages. OFFSET 5000 means PostgreSQL scans and discards 5,000 rows. This is a well-understood limitation of offset pagination, and it affects every ORM and every database.

Cursor-based pagination
// Cursor-based pagination: better than skip/take for both strategies
const orders = await prisma.order.findMany({
  relationLoadStrategy: "query",
  take: 50,
  cursor: lastOrderId ? { id: lastOrderId } : undefined,
  skip: lastOrderId ? 1 : 0,  // skip the cursor itself
  include: {
    items: {
      include: {
        product: true
      }
    },
    user: true
  },
  orderBy: { id: "desc" }
});

// Why this is better:
// 1. No OFFSET — PostgreSQL doesn't scan and discard rows
// 2. The cursor (WHERE id < lastOrderId) uses the primary key index
// 3. Performance is constant regardless of page depth
// 4. With "query" strategy: first query returns 50 rows instantly
//    via index scan, remaining queries load only those 50 rows' relations
//
// With offset-based pagination, page 100 requires scanning 5,000 rows
// to discard 4,950 of them. With cursor pagination, page 100 costs
// the same as page 1.

Cursor pagination eliminates OFFSET entirely. The cursor — typically the last-seen ID — becomes a WHERE condition that uses an index. Page 100 costs the same as page 1. And because the first query returns exactly take rows with no discards, the subsequent "query" strategy IN clauses are optimally sized.

If your application currently uses skip/take and your users paginate beyond page 10, cursor pagination will improve performance regardless of which relation load strategy you choose. Combined with relationLoadStrategy: "query", the improvement is transformative.

How do you diagnose this in your own application?

If you suspect relationLoadStrategy is causing problems, the diagnostic procedure is straightforward. I shall walk through it step by step, because precision in diagnosis prevents wasted effort in treatment.

Step 1: Enable Prisma query logging

// Enable Prisma query logging to see what's actually generated
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({
  log: [
    { level: "query", emit: "event" }
  ]
});

prisma.$on("query", (e) => {
  console.log(`Query: ${e.query}`);
  console.log(`Duration: ${e.duration}ms`);
  console.log(`Params: ${e.params}`);
});

// Then run your query and inspect the output.
// With "join": you will see 1 large query with LATERAL JOINs.
// With "query": you will see 3-6 simple queries with IN clauses.

This emits every SQL query that Prisma sends to PostgreSQL, along with its duration and parameters. The output is verbose — intentionally so. You want to see everything.
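Rather than eyeballing the log, you can flag suspicious entries as they arrive. A sketch, kept as a plain function so it can be tested without a client (the 500ms threshold is my judgment call, not a Prisma default):

```javascript
// Flags log events that look like a slow LATERAL JOIN query.
function isSlowLateral(e, thresholdMs = 500) {
  return /JOIN LATERAL/i.test(e.query) && e.duration >= thresholdMs;
}

// Wiring into the event handler shown above:
// prisma.$on("query", (e) => {
//   if (isSlowLateral(e)) {
//     console.warn(`Slow LATERAL JOIN (${e.duration}ms): ${e.query.slice(0, 200)}`);
//   }
// });
```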

Step 2: Identify the strategy in use

Reading the Prisma query log
// What to look for in the Prisma query log

// RED FLAG: Single query with multiple LATERAL keywords
// Query: SELECT ... FROM "User" LEFT JOIN LATERAL (...) LEFT JOIN LATERAL (...)
// Duration: 15847ms
//
// This is the "join" strategy. If duration > 500ms, switch to "query".

// HEALTHY: Multiple simple queries with IN clauses
// Query: SELECT "id", "email" FROM "User"
// Duration: 1ms
// Query: SELECT ... FROM "Order" WHERE "userId" IN ($1,$2,$3,...)
// Duration: 3ms
// Query: SELECT ... FROM "OrderItem" WHERE "orderId" IN ($1,$2,...)
// Duration: 4ms
//
// This is the "query" strategy. Each query is independently fast.

// DIAGNOSTIC: Compare total duration of all "query" strategy queries
// vs the single "join" strategy query. If the join takes more than
// 2-3x the sum of the separate queries, it is not the right strategy.

The telltale sign of the "join" strategy is a single query containing multiple LEFT JOIN LATERAL clauses. The telltale sign of the "query" strategy is a series of simple SELECT queries with IN clauses. If you see both patterns in the same application — some endpoints using "join" and others using "query" — that is either intentional (good) or accidental (investigate).
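The 2-3x comparison in the diagnostic note above is easily mechanized. A sketch, assuming you have already collected the durations from the query log:

```javascript
// Compare one "join"-strategy duration against the sum of the
// equivalent "query"-strategy durations.
function joinIsLosing(joinMs, queryDurationsMs, factor = 2) {
  const querySumMs = queryDurationsMs.reduce((a, b) => a + b, 0);
  return joinMs > factor * querySumMs;
}

console.log(joinIsLosing(15847, [1, 3, 4])); // true: switch to "query"
console.log(joinIsLosing(12, [1, 3, 4]));    // false: "join" is fine here
```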

Step 3: Compare execution times

Run the same endpoint or query with each strategy and compare the total duration. The difference is often obvious — 200ms versus 15 seconds does not require statistical analysis. If you need to toggle the strategy without changing code, you can set relationLoadStrategy in a Prisma middleware that reads from an environment variable.
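Such a toggle takes only a few lines. A sketch with middleware (the environment variable name is my own invention, not a Prisma convention):

```javascript
// Toggle the strategy globally via an environment variable, e.g.
//   PRISMA_RELATION_LOAD_STRATEGY=query node server.js
function applyStrategyOverride(params, env = process.env) {
  const override = env.PRISMA_RELATION_LOAD_STRATEGY;
  if (override && params.action === "findMany" && params.args) {
    params.args.relationLoadStrategy = override;
  }
  return params;
}

// Wiring (requires a PrismaClient instance):
// prisma.$use(async (params, next) => next(applyStrategyOverride(params)));
```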

Step 4: Run EXPLAIN ANALYZE on the generated SQL

Copy the SQL from Prisma's query log, paste it into psql or your database client, and prefix it with EXPLAIN (ANALYZE, BUFFERS). Look for these indicators:

  • Nested Loop Left Join with high loop counts — loops=2034 or loops=8247 means the inner scan executed thousands of times
  • Actual time far exceeding what the cost estimate implied — the planner priced the plan as cheap, yet execution took 15 seconds
  • High buffer read counts — Buffers: read=3201 means 3,201 page reads from disk, indicating cache thrashing
  • Rows estimate significantly off — rows=500 estimated versus rows=8247 actual means the planner underestimated the join multiplicity

Our EXPLAIN ANALYZE guide covers how to read these plans in detail. For the specific case of LATERAL JOIN pathology, the loop count is your primary diagnostic: if any node shows loops in the thousands, you have found the problem.
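The loop-count check can also be scripted against the text output of EXPLAIN ANALYZE, which is convenient when a plan runs to hundreds of lines. A sketch:

```javascript
// Extract the highest loops= value from EXPLAIN ANALYZE text output.
function maxLoops(planText) {
  const counts = [...planText.matchAll(/loops=(\d+)/g)].map((m) => Number(m[1]));
  return counts.length ? Math.max(...counts) : 0;
}

const plan = `
Nested Loop Left Join (actual time=0.1..15847.2 rows=8247 loops=1)
  -> Index Scan using order_user_idx (actual time=0.01..0.05 rows=4 loops=2034)
`;
console.log(maxLoops(plan)); // 2034
```

Anything in the thousands is the pathology this article describes; anything in single digits is a plan behaving itself.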

Step 5: Check for skip/take + nested includes

Search your codebase for findMany calls that combine skip/take with multi-level include. These are the highest-risk queries. If any of them use relationLoadStrategy: "join" — or if they use the default and your Prisma version defaults to "join" — switch them to "query" immediately. This change is safe, backwards-compatible, and requires no schema modifications.

Step 6: Monitor with Prisma Metrics

Prisma Metrics for production monitoring
// Prisma Metrics (4.6+) — track query performance in production.
// Metrics must be enabled as a preview feature on the client
// generator in schema.prisma:
//
//   generator client {
//     provider        = "prisma-client-js"
//     previewFeatures = ["metrics"]
//   }
const prisma = new PrismaClient();

// Periodically export metrics
const metrics = await prisma.$metrics.json();

// Key metrics to watch:
// - prisma_client_queries_duration_milliseconds_bucket
//   Histogram of query durations. Look for outliers > 1000ms.
//
// - prisma_client_queries_total
//   Total query count. With "query" strategy, expect 3-6x more
//   queries than with "join" — but each should be fast.
//
// - prisma_datasource_queries_duration_milliseconds_bucket
//   Duration at the datasource (PostgreSQL) level, excluding
//   Prisma engine overhead. If this is low but client duration
//   is high, the bottleneck is in Prisma's result merging.

In production, you cannot run EXPLAIN ANALYZE on every query. Prisma Metrics provides aggregate visibility into query durations, query counts, and datasource-level timing. Export these to your monitoring system and set alerts on the duration histograms. A sudden spike in the p99 query duration after a Prisma upgrade is a strong signal that the default strategy has changed.
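Prisma's JSON metrics expose duration histograms as buckets, and a rough high-percentile estimate can be computed from them for alerting. A sketch, assuming buckets arrive as [upperBound, count] pairs (do verify the exact shape your Prisma version emits from $metrics.json()):

```javascript
// Rough percentile estimate from a duration histogram.
// Assumes buckets are [upperBoundMs, countInBucket] pairs.
function approxPercentile(buckets, pct) {
  const total = buckets.reduce((sum, [, n]) => sum + n, 0);
  const target = total * pct;
  let seen = 0;
  for (const [upperBound, n] of buckets) {
    seen += n;
    if (seen >= target) return upperBound; // first bucket covering the target
  }
  return Infinity;
}

// Example: most queries fast, a small tail of very slow ones.
const buckets = [[1, 800], [10, 150], [100, 30], [1000, 15], [16000, 5]];
console.log(approxPercentile(buckets, 0.99)); // 1000
```

A p99 of one second on an endpoint that used to sit at single-digit milliseconds is precisely the post-upgrade signal described above.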

Why does PostgreSQL struggle with LATERAL JOINs here?

I want to be precise about this, because I have seen the question answered incorrectly in several places. This is not a PostgreSQL bug. This is not a limitation of PostgreSQL's query planner. This is the correct execution of a query that asks for an inherently expensive operation.

A LATERAL subquery is, by definition, correlated — it depends on values from preceding tables. This means PostgreSQL generally cannot use hash joins or merge joins for the LATERAL portion: unless the planner can flatten the subquery away, it must use nested loops, evaluating the subquery once per outer row. For a single LATERAL, this is efficient. For four nested LATERALs on one-to-many relations, the loop count grows multiplicatively.

The planner does its best. It uses indexes for the inner scans. It estimates row counts and chooses scan strategies accordingly. But it is solving the problem it was given, and the problem it was given — "correlate four tables through nested per-row subqueries" — has inherently multiplicative cost. A planner cannot optimize away the fundamental structure of the query. It can only choose the fastest way to execute that structure.

When the actual cardinality is higher than estimated — and with nested one-to-many relations, it often is — the plan becomes expensive in ways the planner did not anticipate. PostgreSQL estimates row counts using column statistics (histograms, most common values, correlation). For a simple WHERE user_id = 42, these statistics are excellent. For a four-level nested join where the cardinality at each level depends on the cardinality at the previous level, the estimates compound in error. A 2x overestimate at each of four levels produces a 16x total overestimate — or underestimate, depending on direction.
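The compounding claim is plain arithmetic, but worth a one-line sanity check:

```javascript
// A 2x misestimate at each of four join levels compounds multiplicatively.
const perLevelError = 2;
const levels = 4;
console.log(perLevelError ** levels); // 16, i.e. a 16x total misestimate
```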

The "query" strategy sidesteps this entirely. Each query is independent, non-correlated, and uses IN clauses that PostgreSQL handles with bitmap scans or index scans — well-understood operations with predictable performance characteristics. The planner does not need to estimate join multiplicities because there are no joins. Each query's cost is a function of the IN clause size, which Prisma controls precisely.

In some respects, Prisma's old multi-query approach was an accidental optimization. It avoided a class of planner pathology that the "more efficient" single-query approach walked directly into. There is a lesson here about simplicity: sometimes the approach that looks less sophisticated performs better precisely because it avoids the complexity that sophisticated approaches must manage.

What about Prisma's defaults and future direction?

As of Prisma 5.x and into 6.x, the "join" strategy has been promoted as the preferred approach — it became the default for PostgreSQL. The Prisma documentation emphasizes the reduced round-trip advantage and positions "join" as the more efficient option.

This is true in the narrow sense. Fewer round trips is, all else being equal, better. But all else is not equal when nested LATERAL JOINs on one-to-many relations produce 11,000 index lookups in a single query. The documentation presents a valid optimization principle — fewer round trips — and applies it to a situation where a different concern — join multiplicity — dominates by three orders of magnitude.

I do not say this to criticize the Prisma team. Building an ORM that generates efficient SQL across multiple databases, multiple schema shapes, and multiple usage patterns is genuinely difficult work. The "join" strategy is correct for many queries, and the principle of reducing round trips is sound. The issue is that the default should be chosen based on the worst-case scenario, not the best-case scenario. A default that occasionally wastes 5ms of network overhead is vastly preferable to a default that occasionally causes 15-second regressions.

The Prisma team is aware of these regression reports. The GitHub discussions include multiple reports of 10-15 second regressions on queries that completed in under a second with the "query" strategy. Until Prisma introduces more sophisticated heuristics — perhaps automatically choosing "query" when it detects deep nesting or pagination, or providing per-model defaults — the burden falls on you to choose explicitly.

My recommendation: set relationLoadStrategy: "query" as your default for findMany queries with nested includes. Override to "join" for specific queries where you have confirmed it performs better — typically findUnique calls with shallow, 1:1 relations. This is the conservative approach, and in performance work, conservative defaults that prevent catastrophic regressions are worth more than optimistic defaults that occasionally save a few milliseconds.

Or, if you prefer: dress for the weather that can kill you, not the weather you hope for.

A note on other ORMs and the universality of this problem

Lest you conclude that this is a Prisma-specific problem, I should note that every ORM that loads relations faces a version of this decision. The trade-off between "one complex query" and "many simple queries" is fundamental to relational data access.

Sequelize produces cartesian explosions with its include option — the same row duplication problem, without the LATERAL syntax. SQLAlchemy's joinedload versus selectinload is the same fork in the road: one query with joins, or multiple queries with IN clauses. GORM's Preload versus Joins is the same decision in Go. Eloquent's eager loading uses the multi-query approach by default, which is why Laravel applications rarely hit this particular pathology.

The pattern is universal because the underlying physics is universal. Joining one-to-many relations produces row multiplication. The more levels of one-to-many nesting, the more multiplication. No amount of ORM cleverness can change the fact that 500 users with 4 orders with 4 items produces 8,000 rows in a joined result set. The only question is whether the ORM addresses this multiplication at the SQL level (one query, many rows) or at the application level (many queries, no duplication).

Prisma's relationLoadStrategy is notable not because the problem is unique, but because it gives you an explicit, per-query toggle. Most ORMs make this an architectural decision — you choose a loading strategy at the relationship level, not at the query level. Prisma's approach is more flexible. It also means you have more opportunities to choose wrong.

Where a proxy helps with ORM-generated SQL

Prisma generates the SQL. You choose the strategy. PostgreSQL executes whatever arrives. But there is a gap between what Prisma produces and what PostgreSQL needs — and that gap is where performance lives.

// Your Prisma application sends this to PostgreSQL:
const orders = await prisma.order.findMany({
  relationLoadStrategy: "join",
  include: { items: { include: { product: true } } }
});

// Gold Lapel sees the resulting LATERAL JOIN query,
// detects the nested loop pattern, and can:
//
// 1. Create targeted indexes on the join predicates
//    if they are missing
//
// 2. Identify when the LATERAL approach is producing
//    excessive intermediate rows and flag it
//
// 3. Auto-materialize repeated expensive join patterns
//    into materialized views
//
// Prisma does not know Gold Lapel exists.
// PostgreSQL does not know Prisma exists.
// Gold Lapel sees both sides.

Gold Lapel sits in that gap. It observes the actual SQL that Prisma sends to PostgreSQL — whether it is a single LATERAL JOIN query or five separate IN-clause queries — and optimizes at the wire protocol level.

If the LATERAL JOIN query is hitting an unindexed predicate, Gold Lapel creates the index. If the same expensive join pattern repeats across thousands of requests, Gold Lapel can materialize it. If the "query" strategy's separate queries would benefit from a composite index that does not exist, Gold Lapel detects the pattern and creates one.

The point is not to choose between "join" and "query" for you — that is an application-level decision that depends on your schema, your data volumes, and your access patterns. The point is to ensure that whichever strategy you choose, the SQL it generates runs against a database that is properly indexed and optimized for the actual traffic patterns. Prisma does not know Gold Lapel is there. PostgreSQL does not know Prisma is there. Gold Lapel sees both sides, and that vantage point is where the optimization happens.

The decision, distilled

You have arrived with a query that takes 15 seconds. You will leave with one that takes 10 milliseconds. The change is a single property on a JavaScript object: relationLoadStrategy: "query".

The deeper lesson, if you'll permit me, is that fewer queries is not always fewer work. A single query that asks PostgreSQL to correlate four tables through nested per-row subqueries is more work — dramatically more work — than five simple queries that each scan an index once. The network overhead of extra round trips is measured in milliseconds. The CPU cost of nested loop multiplication is measured in seconds. When those two costs compete, the seconds win. Every time.

Choose "query" as your default for findMany with nested includes. Override to "join" only for findUnique calls with shallow, bounded relations. Audit your paginated endpoints immediately. And when in doubt, run EXPLAIN ANALYZE — not because you need to, but because a well-read query plan is the surest path to a well-run household.

The database was not slow. It was being asked poorly. And now it shall be asked properly.


If the question of ORM-generated query performance interests you more broadly, I have written a rather thorough comparison of ORM versus raw SQL — covering exactly the kind of planner surprises that Prisma's LATERAL JOINs expose.