
The asyncpg + PgBouncer Prepared Statement Trap: A Complete Guide to Escaping It

Your statement cache and your connection pooler are not on speaking terms. Allow me to mediate.

The Waiter of Gold Lapel · Published Mar 5, 2026 · Updated Mar 20, 2026 · 22 min read
We prepared an illustration, but it was executed on the wrong backend. Our sincere apologies.

Good evening. I see you have arrived here with an error message.

You were running asyncpg behind PgBouncer in transaction mode. Everything worked beautifully in development, passed CI with flying colors, and behaved impeccably in staging. Then production traffic arrived, and your logs filled with this:

# You deploy your FastAPI app with asyncpg behind PgBouncer.
# Everything works locally. CI passes. Staging is fine.
# Then production traffic arrives, and:

asyncpg.exceptions.DuplicatePreparedStatementError:
  prepared statement "__asyncpg_stmt_1a2b3c__" already exists

# Or, if you are particularly unlucky:

asyncpg.exceptions.InvalidSQLStatementNameError:
  prepared statement "__asyncpg_stmt_4d5e6f__" does not exist

# Both mean the same thing: your statement cache
# and your connection pooler have irreconcilable differences.

You are not alone. This error has generated at least five separate GitHub issues, a dozen Stack Overflow questions, and approximately one frustrated engineer per team that discovers it. I have personally attended to this matter more times than I care to count, and the scene is always the same: a deployment that worked perfectly in every prior environment, undone by a configuration assumption that nobody thought to question.

The information needed to fix it exists, but it is scattered across asyncpg issue trackers, PgBouncer changelogs, SQLAlchemy documentation footnotes, and Supabase community threads. Some of it is outdated. Some of it is wrong. A meaningful portion of it will fix the error while silently introducing a performance regression that nobody notices until the quarterly capacity review.

I have gathered it all in one place and tested each approach. By the end of this guide, you will understand why this happens at the wire protocol level, know every available fix, and — critically — understand the performance trade-offs of each approach so you can choose the right one for your situation.

Why does asyncpg fail with PgBouncer in transaction mode?

The root cause is a mismatch between two perfectly reasonable design decisions that happen to be incompatible.

asyncpg's design decision: prepare every query as a server-side prepared statement, cache it by name, and reuse it on subsequent executions. This is genuinely smart — prepared statements skip the parse and plan phases on repeat execution, yielding a 15-30% latency reduction for typical OLTP queries. By default, asyncpg caches the last 100 unique queries per connection (statement_cache_size=100). The statement names come from a per-connection counter, producing identifiers like __asyncpg_stmt_1__, __asyncpg_stmt_2__, and so on.

PgBouncer's design decision: in transaction mode, release the PostgreSQL backend connection back to the pool after each transaction completes. This is also genuinely smart — it allows hundreds of application connections to share a handful of database connections, which is essential at scale. A typical ratio might be 200 application connections sharing 20 PostgreSQL backends, a 10:1 multiplier that dramatically reduces PostgreSQL's memory consumption.

The conflict: prepared statements are bound to a specific PostgreSQL backend process. They live in that process's memory. They are invisible to every other backend. When PgBouncer shuffles backend connections between transactions, the prepared statement that asyncpg expects to find on "its" connection may now live on a completely different backend — or not exist at all.

The shuffle that breaks everything
# Step 1: asyncpg prepares a statement on PgBouncer connection A
#         (which maps to PostgreSQL backend connection X)
Client  -->  PgBouncer [conn A]  -->  PostgreSQL [backend X]
             PARSE "__asyncpg_stmt_1__" = "SELECT * FROM users WHERE id = $1"
             # PostgreSQL backend X now knows about __asyncpg_stmt_1__

# Step 2: Transaction completes. PgBouncer returns backend X to the pool.

# Step 3: New request arrives. PgBouncer assigns backend Y this time.
Client  -->  PgBouncer [conn A]  -->  PostgreSQL [backend Y]
             EXECUTE "__asyncpg_stmt_1__"
             # Backend Y has never heard of __asyncpg_stmt_1__.
             # InvalidSQLStatementNameError.

# Or the reverse — backend X gets reused by a different client
# that tries to PARSE the same statement name:
             # DuplicatePreparedStatementError.

The error name varies depending on the direction of the mismatch. InvalidSQLStatementNameError means asyncpg tried to execute a prepared statement on a backend that does not have it — the statement was prepared on backend X, but this transaction landed on backend Y. DuplicatePreparedStatementError means asyncpg tried to create a prepared statement on a backend that already has one with the same name — left behind by a previous client that happened to execute the same query. Both are symptoms of the same underlying problem: asyncpg believes it has a stable connection, and it does not.
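Both failure directions can be reproduced without a database at all. Here is a toy simulation — the Backend class, the PIDs, and the error strings are invented stand-ins for PostgreSQL's actual behavior, not real asyncpg or PostgreSQL code:

```python
class Backend:
    """A stand-in for one PostgreSQL backend process."""
    def __init__(self, pid):
        self.pid = pid
        self.prepared = {}  # statement name -> SQL text, private to this backend

    def parse(self, name, sql):
        if name in self.prepared:
            raise RuntimeError(f'prepared statement "{name}" already exists')
        self.prepared[name] = sql

    def execute(self, name):
        if name not in self.prepared:
            raise RuntimeError(f'prepared statement "{name}" does not exist')
        return self.prepared[name]

backend_x, backend_y = Backend(12345), Backend(12347)

# Transaction 1: the client prepares its statement on backend X.
backend_x.parse("__asyncpg_stmt_1__", "SELECT * FROM users WHERE id = $1")

# Transaction 2 lands on backend Y: the InvalidSQLStatementNameError direction.
try:
    backend_y.execute("__asyncpg_stmt_1__")
except RuntimeError as err:
    print(err)  # prepared statement "__asyncpg_stmt_1__" does not exist

# Meanwhile backend X is handed to a second client that prepares the
# same name: the DuplicatePreparedStatementError direction.
try:
    backend_x.parse("__asyncpg_stmt_1__", "SELECT * FROM orders WHERE id = $1")
except RuntimeError as err:
    print(err)  # prepared statement "__asyncpg_stmt_1__" already exists
```

The simulation makes the asymmetry visible: which error you see depends only on whether the reassigned backend is missing the statement or already holds one under that name.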

What happens at the wire protocol level

If you will permit me a brief detour into the mechanics, understanding the PostgreSQL wire protocol makes the entire problem — and every solution — considerably more intuitive.

PostgreSQL's extended query protocol separates query execution into distinct phases: Parse, Bind, and Execute. The simple query protocol sends SQL as a text string and receives results. The extended protocol is more structured and more efficient, because it allows the Parse phase (which includes planning) to happen once and the Bind/Execute phases to happen many times with different parameters.

The extended query protocol, step by step
# What happens at the PostgreSQL wire protocol level:
#
# The extended query protocol has three distinct phases:
#
# 1. PARSE  — sends SQL text, creates a named prepared statement
# 2. BIND   — binds parameter values to a prepared statement
# 3. EXECUTE — runs the bound statement
#
# asyncpg uses named prepared statements for everything:

PARSE "__asyncpg_stmt_a1b2__" AS "SELECT * FROM users WHERE id = $1"
  --> PostgreSQL stores this on backend PID 12345

BIND "__asyncpg_stmt_a1b2__" WITH ($1 = 42)
  --> PostgreSQL looks up the statement on PID 12345 — found

EXECUTE
  --> Returns the row

# Next transaction, PgBouncer assigns a different backend:

BIND "__asyncpg_stmt_a1b2__" WITH ($1 = 99)
  --> PostgreSQL looks up the statement on PID 12347 — NOT FOUND
  --> ERROR: prepared statement "__asyncpg_stmt_a1b2__" does not exist

asyncpg uses the extended protocol exclusively. It never sends queries through the simple protocol. This is a deliberate performance choice — the extended protocol enables binary encoding of parameters and results, which is faster than text encoding. But it also means that every query asyncpg sends creates a server-side state artifact (the prepared statement) that is meaningful only within the specific backend process that received it.

PgBouncer, operating in transaction mode, is unaware of this state. It sees connections as interchangeable units. After a transaction commits, the backend is returned to a shared pool, and any subsequent transaction from any client may receive it. The prepared statements remain on the backend like furniture left behind by a previous tenant — present but belonging to no one.

I should note: PgBouncer in session mode does not have this problem, because a client retains its backend connection for the entire session. Session mode preserves all per-connection state, including prepared statements, session variables, and advisory locks. The trade-off is that session mode cannot multiplex connections — each client consumes a dedicated backend, which limits the connection multiplier to 1:1. For most production deployments, that defeats the purpose of a connection pooler.

Why this only appears in production

This error has a particularly vexing quality: it is invisible in every environment except production. Allow me to explain why, because understanding this will save you from doubting your test infrastructure.

In development, you are likely connecting directly to PostgreSQL without a pooler. No pooler, no backend shuffling, no problem. asyncpg's statement cache works exactly as designed.

In CI, your tests run sequentially or with limited parallelism. Even if PgBouncer is present, low concurrency means backend connections are rarely reassigned. The same backend handles every transaction, and the prepared statements remain valid. You would need sustained concurrent load to trigger the shuffle.

In staging, traffic is typically low. One or two requests per second. PgBouncer has no pressure to reassign backends, so each client effectively operates in session mode even though the configuration says transaction mode. The error never surfaces.

In production, real traffic arrives. Multiple application instances. Dozens of concurrent requests. PgBouncer begins actively reassigning backends after each transaction to serve the demand. The shuffle begins, and the errors appear within minutes.

This is why the error is so disorienting. Every environment that was supposed to catch it — development, CI, staging — provided false confidence. The architecture of the problem requires concurrency to manifest, and concurrency is the one thing that test environments typically lack.

The quick fix (and what it costs you)

The fix that every search result will suggest first:

# The fix everyone finds first: disable the statement cache entirely.
import asyncpg

pool = await asyncpg.create_pool(
    "postgresql://user:pass@pgbouncer:6432/mydb",
    statement_cache_size=0,  # no more prepared statements
)

# This works. The errors stop. But you have just surrendered
# the 15-30% performance advantage of prepared statements.
# Every query is now parsed, planned, and optimized from scratch.
# Every. Single. Time.

Setting statement_cache_size=0 tells asyncpg to never create server-side prepared statements. Every query is sent as an unnamed prepared statement — parsed, planned, and optimized on each execution, then immediately discarded. The connection pooler shuffle no longer matters because there are no named statements to get lost.

This is a perfectly valid approach if your queries are diverse (many unique SQL strings, few repetitions) or if your latency requirements are generous. But for applications that execute the same handful of queries thousands of times per minute — which describes most web applications — you are leaving real performance on the table.

How much performance? The numbers deserve proper attention.

Prepared vs unprepared statement benchmark
# Benchmarking prepared vs unprepared statements
# Environment: PostgreSQL 16, asyncpg 0.29, 100k executions
# Query: SELECT id, email, name FROM users WHERE id = $1

import asyncio
import time
import asyncpg

QUERY = "SELECT id, email, name FROM users WHERE id = $1"

async def bench(pool, n=100_000):
    async with pool.acquire() as conn:
        start = time.perf_counter()
        for i in range(n):
            await conn.fetchrow(QUERY, i % 1000)
        return time.perf_counter() - start

async def main():
    dsn = "postgresql://user:pass@localhost:5432/mydb"

    # With the statement cache (default): the query is parsed and
    # planned once per connection, then reused on every execution.
    cached = await asyncpg.create_pool(dsn)

    # Without the statement cache: every execution is parsed,
    # planned, and immediately discarded.
    uncached = await asyncpg.create_pool(dsn, statement_cache_size=0)

    print(f"prepared:   {await bench(cached):.1f}s")
    print(f"unprepared: {await bench(uncached):.1f}s")

asyncio.run(main())

# Results (median of 5 runs):
#   Prepared:   12.4s  (0.124ms per query)
#   Unprepared: 16.1s  (0.161ms per query)
#   Difference: 23% slower without prepared statements

In my measurements, prepared statements reduce query latency by 15-30% for simple OLTP queries (single-table lookups, filtered SELECTs, indexed joins). The savings come from skipping the parse and plan phases, which for a 2ms query might account for 0.3-0.6ms. That does not sound like much until you multiply it by 10,000 queries per second. At scale, it is the difference between 3.2 seconds and 4.1 seconds of cumulative query time per second — nearly a full second of CPU capacity reclaimed.
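To make that arithmetic concrete, here is the same calculation in code. The per-query latencies are the illustrative figures from this paragraph, not universal constants:

```python
# Cumulative DB time per wall-clock second, at a given throughput.
qps = 10_000          # queries per second
prepared_ms = 0.32    # illustrative per-query latency with the statement cache
unprepared_ms = 0.41  # and without it

busy_prepared = qps * prepared_ms / 1000    # seconds of query time per second
busy_unprepared = qps * unprepared_ms / 1000

print(f"{busy_prepared:.1f}s vs {busy_unprepared:.1f}s")       # 3.2s vs 4.1s
print(f"reclaimed: {busy_unprepared - busy_prepared:.2f}s/s")  # 0.90s/s
```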

I should be honest about when this does not matter. If your queries take 50ms or more — complex joins, analytical aggregations, full-text searches — the parse and plan phases are a rounding error compared to actual execution time. The 0.3ms you save on parsing is irrelevant next to 50ms of sequential scan. For these workloads, statement_cache_size=0 is the correct permanent solution, not just an emergency fix.

But for the fast-query, high-throughput pattern that characterizes most web applications — simple lookups by primary key, filtered index scans, pagination queries — the prepared statement advantage is real and material.

So: statement_cache_size=0 is the correct emergency fix. Apply it, stop the errors, restore your service. Then read the rest of this guide to find a better long-term solution.

How do you fix this with SQLAlchemy?

If you are using SQLAlchemy 2.0 with the asyncpg dialect — and statistically, you probably are — the configuration is slightly more involved. SQLAlchemy wraps asyncpg's connection management, so you need to pass the cache settings through connect_args.

The error as it appears through SQLAlchemy
# SQLAlchemy 2.0 + asyncpg + PgBouncer: the triple trap.
# SQLAlchemy's async engine uses asyncpg under the hood,
# and the default configuration inherits asyncpg's statement cache.

from sqlalchemy.ext.asyncio import create_async_engine

# This WILL fail behind PgBouncer in transaction mode:
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@pgbouncer:6432/mydb"
)

# The error surfaces as:
# sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg
#   .Error) <class 'asyncpg.exceptions.InvalidSQLStatementNameError'>:
#   prepared statement "__asyncpg_stmt_..." does not exist

The error is the same asyncpg error, wrapped in SQLAlchemy's exception hierarchy. The traceback may be several frames deep, and the asyncpg-specific error is often buried beneath SQLAlchemy's own exception classes. Look for InvalidSQLStatementNameError or DuplicatePreparedStatementError in the cause chain.
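A small helper saves squinting at tracebacks. This is generic Python exception plumbing (walking `__cause__` to the end of the chain), demonstrated with stand-in exception types rather than the real SQLAlchemy and asyncpg classes:

```python
def root_cause(exc: BaseException) -> BaseException:
    """Follow the __cause__ chain to the innermost exception."""
    while exc.__cause__ is not None:
        exc = exc.__cause__
    return exc

# Demonstration with stand-ins; the real chain would be
# sqlalchemy.exc.DBAPIError -> asyncpg.exceptions.InvalidSQLStatementNameError.
try:
    try:
        raise LookupError('prepared statement "__asyncpg_stmt_1__" does not exist')
    except LookupError as inner:
        raise RuntimeError("DBAPIError wrapper") from inner
except RuntimeError as wrapped:
    print(type(root_cause(wrapped)).__name__)  # LookupError
```

On a real SQLAlchemy `DBAPIError` you can also reach the driver-level exception directly via its `.orig` attribute.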

The fix requires reaching through SQLAlchemy to configure asyncpg directly:

Three levels of fix, depending on your setup
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

# Fix 1: Disable asyncpg's statement cache via connect_args
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@pgbouncer:6432/mydb",
    connect_args={
        "statement_cache_size": 0,        # disable asyncpg's cache
        "prepared_statement_cache_size": 0  # disable the LRU fallback too
    },
)

# Fix 2: If PgBouncer is already pooling, you may also want NullPool
# to avoid double-pooling (SQLAlchemy pool + PgBouncer pool).
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@pgbouncer:6432/mydb",
    poolclass=NullPool,  # let PgBouncer handle all pooling
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
    },
)

# Fix 3: The thorough version — also disable implicit transactions
# that SQLAlchemy wraps around raw executions.
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@pgbouncer:6432/mydb",
    poolclass=NullPool,
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
    },
    execution_options={
        "isolation_level": "AUTOCOMMIT",  # only if your app manages its own transactions
    },
)

A few notes on these options:

Fix 1 is sufficient for most applications. You must disable both statement_cache_size (asyncpg's native cache) and prepared_statement_cache_size (a second, dialect-level LRU cache of prepared statements that SQLAlchemy maintains on top of asyncpg). Some guides only mention the first setting, which leads to a reduced but not eliminated error rate — a particularly maddening debugging experience. You fix the problem, deploy with confidence, and then at 3 AM the error reappears at one-tenth the previous frequency. The second cache is the culprit.

Fix 2 adds NullPool, which disables SQLAlchemy's own connection pooling. This deserves further discussion, because double-pooling is a subtle problem that many teams do not realize they have.

Double pooling: SQLAlchemy's pool sitting in front of PgBouncer's pool
# Double pooling: what happens without NullPool
#
# SQLAlchemy pool (default QueuePool, pool_size=5):
#   Maintains 5 persistent connections to PgBouncer.
#   These connections are held open even when idle.
#
# PgBouncer pool (e.g., default_pool_size=20):
#   Maintains up to 20 backend connections to PostgreSQL.
#
# The problem:
#
# App instance 1:  SQLAlchemy pool [5 conns] ──┐
# App instance 2:  SQLAlchemy pool [5 conns] ──┤──> PgBouncer [20 conns] ──> PostgreSQL
# App instance 3:  SQLAlchemy pool [5 conns] ──┤
# App instance 4:  SQLAlchemy pool [5 conns] ──┘
#
# 4 instances × 5 idle connections = 20 PgBouncer slots consumed
# even when the app is doing nothing.
#
# With NullPool:
#
# App instance 1:  [on-demand conns] ──┐
# App instance 2:  [on-demand conns] ──┤──> PgBouncer [20 conns] ──> PostgreSQL
# App instance 3:  [on-demand conns] ──┤
# App instance 4:  [on-demand conns] ──┘
#
# Connections are opened when needed, closed when done.
# PgBouncer can distribute its 20 slots based on actual demand.

When PgBouncer is already pooling connections, having SQLAlchemy maintain a second pool creates double-pooling: connections are held open by SQLAlchemy even when idle, preventing PgBouncer from reassigning those backends to other application instances. NullPool means SQLAlchemy opens a fresh connection for each engine operation and closes it afterward, letting PgBouncer manage the lifecycle. This is the approach SQLAlchemy's own documentation recommends.
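If you do keep a SQLAlchemy-side pool, a sanity check worth scripting is that the client connections across all instances fit under PgBouncer's client limit. The numbers below are illustrative — pool_size=5 and max_overflow=10 are SQLAlchemy's QueuePool defaults, and max_client_conn is whatever your pgbouncer.ini says:

```python
# Capacity math behind the double-pooling diagram above.
instances = 4
pool_size, max_overflow = 5, 10  # SQLAlchemy QueuePool defaults
max_client_conn = 100            # illustrative pgbouncer.ini setting

idle_floor = instances * pool_size                  # slots held even at rest
worst_case = instances * (pool_size + max_overflow)  # slots under full load

print(idle_floor)  # 20 PgBouncer client slots consumed while idle
print(worst_case)  # 60 under full load
assert worst_case <= max_client_conn
```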

Fix 3 is for applications that manage their own transaction boundaries (typically with explicit BEGIN/COMMIT calls). Most web frameworks handle this through middleware, so you likely do not need AUTOCOMMIT unless you are doing something deliberately unusual.

The complete FastAPI configuration

Since the most common context for this error is a FastAPI application using SQLAlchemy with asyncpg behind PgBouncer, allow me to provide a complete, copy-ready configuration. I have seen enough partial examples produce partial fixes to know that completeness matters here.

Production-ready FastAPI + SQLAlchemy + asyncpg + PgBouncer
# Full FastAPI + SQLAlchemy + asyncpg + PgBouncer configuration
# Copy this and adjust to your needs.

from contextlib import asynccontextmanager
from sqlalchemy.ext.asyncio import (
    create_async_engine,
    async_sessionmaker,
    AsyncSession,
)
from sqlalchemy.pool import NullPool
from sqlalchemy import select
from fastapi import FastAPI, Depends

DATABASE_URL = "postgresql+asyncpg://user:pass@pgbouncer:6432/mydb"

engine = create_async_engine(
    DATABASE_URL,
    poolclass=NullPool,
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
    },
)

async_session = async_sessionmaker(engine, expire_on_commit=False)

async def get_db():
    async with async_session() as session:
        yield session

@asynccontextmanager
async def lifespan(app: FastAPI):
    yield
    await engine.dispose()

app = FastAPI(lifespan=lifespan)

# `User` is your declarative ORM model, defined elsewhere.
@app.get("/users/{user_id}")
async def get_user(user_id: int, db: AsyncSession = Depends(get_db)):
    result = await db.execute(
        select(User).where(User.id == user_id)
    )
    return result.scalar_one_or_none()

Note the expire_on_commit=False on the session factory. Without this, SQLAlchemy marks all loaded attributes as expired after each commit, which means accessing any attribute after a commit triggers a lazy load — a new query. Behind PgBouncer with NullPool, that lazy load may fail because the session's connection has already been returned. This is not related to the prepared statement problem, but it surfaces in the same architectural context and I have seen it confuse teams who think their PgBouncer fix is incomplete.

What about Supabase, Neon, and other managed poolers?

If you are on Supabase, your PostgreSQL connections route through Supavisor, an Elixir-based connection pooler that operates in transaction mode by default. The same prepared statement problem applies, with the same fixes.

# Supabase uses Supavisor (an Elixir-based pooler) in transaction mode.
# The same prepared statement problem applies.

# Supabase connection strings typically look like:
# postgresql://user:pass@db.xyz.supabase.co:6543/postgres

# For asyncpg direct:
pool = await asyncpg.create_pool(
    "postgresql://user:pass@db.xyz.supabase.co:6543/postgres",
    statement_cache_size=0,
)

# For SQLAlchemy + asyncpg:
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@db.xyz.supabase.co:6543/postgres",
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
    },
)

# Supabase also offers a session-mode port (5432) for direct connections.
# If you use port 5432, prepared statements work normally.
# But you lose the pooling benefits and may hit connection limits.

Neon's connection pooler has the same behavior:

Neon's pooled endpoint requires the same treatment
# Neon's connection pooler also operates in transaction mode.
# Same root cause, same fix.

# Neon pooled endpoint:
# postgresql://user:pass@ep-something.us-east-2.aws.neon.tech:5432/neondb?sslmode=require

# For asyncpg:
pool = await asyncpg.create_pool(
    "postgresql://user:pass@ep-something.us-east-2.aws.neon.tech:5432/neondb?sslmode=require",
    statement_cache_size=0,
)

# Neon's serverless driver (@neondatabase/serverless) handles this
# automatically — it does not use prepared statements over HTTP.
# But if you are using asyncpg against Neon's pooled endpoint,
# you need the same fix as everyone else.

The pattern is universal: any managed PostgreSQL service that puts a transaction-mode pooler between your application and the database — which is nearly all of them — will trigger this error if asyncpg's statement cache is enabled. Railway, Render, Crunchy Bridge, Tembo, Aiven — the specific pooler implementation varies (PgBouncer, Supavisor, pgcat, Odyssey), but the underlying problem is identical.

Some managed services offer a "direct" or "session" port that bypasses the pooler. Supabase exposes port 5432 for direct connections alongside the pooled port 6543. Using the direct port eliminates the prepared statement problem entirely, but you lose the pooling benefits and may hit connection limits under load. It is a reasonable choice for background workers or batch jobs that maintain long-lived connections, but not for web request handlers that create and release connections rapidly.

I should note an important distinction: Supavisor and PgBouncer handle statement deallocation differently. PgBouncer (pre-1.21) silently ignores the mismatch until execution fails. Supavisor may actively deallocate statements when reassigning backends, which can surface the error more quickly — actually a kindness, as it makes the problem obvious during initial testing rather than hiding until a traffic spike.

Can PgBouncer 1.21+ actually fix this properly?

Yes — with caveats. PgBouncer 1.21, released in late 2023, added prepared statement tracking via the max_prepared_statements configuration parameter. This was one of the most requested features in PgBouncer's history, and its arrival changed the calculus for this problem significantly.

# PgBouncer 1.21+ (released 2023) added prepared statement tracking.
# It can forward PARSE/BIND/EXECUTE to the correct backend.

# pgbouncer.ini
[pgbouncer]
pool_mode = transaction
max_prepared_statements = 100  # new in 1.21

# With this setting, PgBouncer tracks which statements are prepared
# on which backends and re-prepares them as needed.

# asyncpg can now use its statement cache:
pool = await asyncpg.create_pool(
    "postgresql://user:pass@pgbouncer:6432/mydb",
    statement_cache_size=100,  # keep the cache!
)

# Caveat: tracking is scoped per client connection, and PgBouncer
# renames statements internally before forwarding them, so asyncpg's
# per-connection names (__asyncpg_stmt_1__, ...) can safely be reused
# by different clients. The cost is bookkeeping: PgBouncer must hold
# the SQL text of every tracked statement.

When max_prepared_statements is set, PgBouncer tracks which prepared statements have been created on which backend connections. When a client tries to execute a prepared statement on a backend that does not have it, PgBouncer transparently re-prepares it. The client never sees an error. From asyncpg's perspective, every statement it prepared is always available. The magic happens at the pooler layer.
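A toy model conveys the idea — this is an illustration of the observable behavior, not PgBouncer's actual implementation. The pooler remembers the SQL text behind each client-side statement name and silently re-issues PARSE on any backend that has not seen it:

```python
class TrackingPooler:
    """Toy sketch of transaction-mode pooling with statement tracking."""
    def __init__(self):
        self.client_stmts = {}  # client statement name -> SQL text
        self.reprepares = 0     # how often a backend needed a re-PARSE

    def parse(self, name, sql, backend):
        self.client_stmts[name] = sql
        backend.setdefault(name, sql)  # backend modeled as a dict: name -> SQL

    def execute(self, name, backend):
        if name not in backend:  # this backend never saw the statement...
            backend[name] = self.client_stmts[name]  # ...re-PARSE transparently
            self.reprepares += 1  # costs one parse+plan cycle
        return backend[name]

pooler = TrackingPooler()
backend_x, backend_y = {}, {}

pooler.parse("__asyncpg_stmt_1__", "SELECT * FROM users WHERE id = $1", backend_x)
pooler.execute("__asyncpg_stmt_1__", backend_x)  # already prepared: no cost
pooler.execute("__asyncpg_stmt_1__", backend_y)  # new backend: silent re-prepare
print(pooler.reprepares)  # 1
```

The `reprepares` counter is exactly the hidden cost discussed below: each backend reassignment can force one extra parse+plan before the statement is reusable again.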

Before relying on this fix, verify your version:

Check your PgBouncer version
# Check your PgBouncer version before relying on this fix.

# From the command line:
pgbouncer --version
# PgBouncer 1.22.0

# Or from the admin console (connect to PgBouncer's admin port):
psql -h localhost -p 6432 -U pgbouncer pgbouncer -c "SHOW VERSION;"
#          version
# -------------------------
#  PgBouncer 1.22.0

# Common versions shipped by package managers (as of early 2026):
#   Ubuntu 22.04 LTS:  PgBouncer 1.17 — does NOT support max_prepared_statements
#   Ubuntu 24.04 LTS:  PgBouncer 1.22 — supports it
#   Debian 12:         PgBouncer 1.18 — does NOT support it
#   Amazon Linux 2023: PgBouncer 1.21 — supports it (minimum version)
#   Homebrew (macOS):  PgBouncer 1.23 — supports it

The caveats deserve careful consideration:

  • Version requirement. You need PgBouncer 1.21 or later. Many managed services and Linux distribution packages still ship older versions. If you are on Ubuntu 22.04 LTS (still widely deployed), the default package is PgBouncer 1.17, which does not support this feature. You would need to install from the PgBouncer APT repository or build from source.
  • Memory overhead. PgBouncer must store the SQL text of every tracked prepared statement. With max_prepared_statements=100 and 50 backend connections, that is up to 5,000 statement texts in memory. For typical queries (a few hundred bytes each), this is negligible. But applications with large dynamically-generated SQL — some ORMs produce queries measured in kilobytes — should set a conservative limit and monitor PgBouncer's memory consumption.
  • Re-preparation latency. When PgBouncer re-prepares a statement on a new backend, that first execution incurs the parse+plan cost that prepared statements normally avoid. Under high connection churn, this can happen frequently enough to negate some of the prepared statement advantage. The benefit of prepared statements is amortized over many executions on the same backend — if the backend changes every few transactions, the amortization window shrinks.
  • Name reuse. asyncpg names its statements with a per-connection counter (__asyncpg_stmt_1__, __asyncpg_stmt_2__, ...), so the same name routinely refers to different SQL on different client connections. PgBouncer handles this by scoping its tracking per client and renaming statements internally before forwarding them to the backend. Note, however, that only protocol-level prepared statements are tracked — statements created with SQL-level PREPARE/EXECUTE bypass the tracking entirely.
  • No DEALLOCATE tracking. If your application explicitly deallocates prepared statements (rare with asyncpg, but possible), PgBouncer's tracking may become stale. The statement tracker assumes prepared statements persist for the lifetime of the backend connection, which is the default PostgreSQL behavior.

Despite these caveats, PgBouncer 1.21+ with max_prepared_statements is the best available fix if you must use PgBouncer in transaction mode and want to keep prepared statement performance. Upgrade if you can.

"Every connection pooler makes trade-offs. The question is not which pooler is best in the abstract, but which trade-offs your application can tolerate — and which ones it cannot."

— from You Don't Need Redis, Chapter 17: Sorting Out the Connection Poolers

Should you switch to psycopg3 instead?

It is a legitimate question. psycopg3 handles the pooler compatibility problem with a fundamentally different strategy, and for many teams it is the right answer.

# psycopg3 handles this more gracefully out of the box.
from psycopg_pool import AsyncConnectionPool

pool = AsyncConnectionPool(
    "postgresql://user:pass@pgbouncer:6432/mydb",
    min_size=5,
    max_size=20,
    open=False,  # open explicitly from async code: await pool.open()
    kwargs={
        "prepare_threshold": None,  # None (not 0!) disables prepared statements
    },
)

# Or, with PgBouncer 1.21+, keep automatic preparation:
pool = AsyncConnectionPool(
    "postgresql://user:pass@pgbouncer:6432/mydb",
    min_size=5,
    max_size=20,
    open=False,
    kwargs={
        "prepare_threshold": 5,  # prepare after 5 executions (the default)
    },
)

# psycopg3's prepare_threshold is smarter than asyncpg's approach:
# it only prepares statements that are executed repeatedly,
# reducing the total number of prepared statements in play.
# Fewer prepared statements = fewer chances for conflict.

psycopg3's prepare_threshold (default: 5) only promotes a query to a prepared statement after it has been executed five times on the same connection. This is a meaningfully different philosophy than asyncpg's approach of preparing everything immediately. The result: infrequent queries — the ones most likely to cause problems with backend reassignment, because they may not be prepared on the new backend — are never prepared at all. Only your hot-path queries get the prepared statement treatment, and those are the ones that benefit most from skipping the parse+plan phases.

The adaptive approach also reduces the total number of prepared statements in play. Where asyncpg might prepare 100 distinct statements per connection (its default cache size), psycopg3 might prepare 15-20 — only the queries that are genuinely executed repeatedly. Fewer prepared statements means fewer chances for conflict during backend reassignment, and less memory consumed on each PostgreSQL backend.
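The adaptive idea is easy to sketch. The counting logic below is my own illustration of the threshold behavior, not psycopg's actual implementation:

```python
from collections import Counter

class AdaptivePreparer:
    """Promote a query to a prepared statement after `threshold` executions."""
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.executions = Counter()  # SQL text -> times executed
        self.prepared = set()        # SQL promoted to prepared statements

    def execute(self, sql):
        """Return True if this execution uses a prepared statement."""
        if sql in self.prepared:
            return True
        self.executions[sql] += 1
        if self.executions[sql] >= self.threshold:
            self.prepared.add(sql)
            return True
        return False

prep = AdaptivePreparer(threshold=5)
hot = "SELECT id FROM users WHERE id = $1"
results = [prep.execute(hot) for _ in range(6)]
print(results)  # [False, False, False, False, True, True]

prep.execute("SELECT * FROM one_off_report")  # cold queries are never promoted
print(prep.prepared == {hot})  # True
```

Run against a realistic query mix, only the hot queries cross the threshold — which is precisely why the prepared-statement footprint stays small.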

The trade-off: psycopg3 is roughly 25-35% slower than asyncpg for raw query execution, owing to asyncpg's Cython-accelerated binary decoding. Whether that matters depends on your latency budget. If your queries take 50ms and the driver adds 1.5ms versus 1.0ms, the driver choice is noise. If your queries take 2ms, the driver overhead is proportionally significant.

Switching drivers is also not free. asyncpg uses $1, $2 placeholders; psycopg3 uses %s or %(name)s. asyncpg returns Record objects with attribute access; psycopg3 returns tuples by default (though row_factory provides named access). If you are using raw driver calls throughout your codebase, the migration touches every query. If you are behind SQLAlchemy, the driver swap is nearly transparent — change postgresql+asyncpg:// to postgresql+psycopg:// in the connection string and adjust the connect_args.

I should be candid: for a new project starting today behind a transaction-mode pooler, psycopg3 is the more pragmatic choice. Its pooler compatibility is better by design, its prepare_threshold adapts gracefully, and the performance difference versus asyncpg is unlikely to be your bottleneck. asyncpg remains the faster driver in absolute terms, but speed that requires disabling the statement cache to work behind a pooler is a rather Pyrrhic advantage.

Debugging prepared statement issues in production

When the error first appears, the immediate question is usually not "how do I fix this" but "how do I confirm this is what I think it is." Allow me to share the diagnostic approach I find most effective.

Inspecting prepared statements on PostgreSQL backends
# Debugging: how to see what prepared statements exist on a backend

# Connect directly to PostgreSQL (not through PgBouncer) and query:
SELECT name, statement, prepare_time
FROM pg_prepared_statements
ORDER BY prepare_time DESC;

#            name            |                statement                 |         prepare_time
# ---------------------------+------------------------------------------+-------------------------------
#  __asyncpg_stmt_a1b2c3__  | SELECT * FROM users WHERE id = $1        | 2026-03-11 14:22:01.123456+00
#  __asyncpg_stmt_d4e5f6__  | SELECT count(*) FROM orders WHERE ...    | 2026-03-11 14:21:58.654321+00

# This view is per-backend-process. Each PostgreSQL backend
# has its own set of prepared statements. If you connect through
# PgBouncer, you will see the statements for whichever backend
# PgBouncer assigned you — which may not be the one you expect.

# Note: pg_prepared_statements is session-local. There is NO view
# that lists prepared statements across ALL backends — you can only
# inspect the backend you are currently connected to. To count the
# statements on this backend:
SELECT count(*) AS stmt_count
FROM pg_prepared_statements;

# In practice, the most useful diagnostic is to check whether
# the statement count is growing unboundedly, which indicates
# a cache that is never being evicted.

There are several things to look for:

Statement accumulation. If the number of prepared statements per backend grows without bound over time, your statement cache size may be too high or your eviction is not working. asyncpg's default limit of 100 statements per connection provides a natural ceiling, but bugs or unusual usage patterns can circumvent it.

Statement naming patterns. asyncpg's statements follow the pattern __asyncpg_stmt_<hash>__. If you see prepared statements with different naming conventions (e.g., plain names without the asyncpg prefix), another driver or tool is creating them. This is relevant if you are using PgBouncer 1.21+ tracking, because name collisions between different tools can cause subtle failures.

Error frequency correlation with traffic. The prepared statement error rate should correlate closely with request volume. If errors appear in bursts during traffic spikes and disappear during quiet periods, you have confirmed that backend reassignment under load is the trigger. If errors appear at a steady rate regardless of traffic, something else may be going on — perhaps a connection leak that prevents PgBouncer from reusing backends cleanly.

PgBouncer's SHOW STATS and SHOW POOLS. Connect to PgBouncer's admin console and check the ratio of sv_active (server connections in use) to sv_idle (server connections available). If sv_idle is consistently zero, PgBouncer is under backend pressure and reassigning aggressively — which maximizes the chance of prepared statement conflicts.
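The accumulation check automates nicely. A small helper — a sketch, where fetchval is any async callable with the signature of asyncpg's conn.fetchval, and the default limit of 100 mirrors asyncpg's default cache size:

```python
async def check_stmt_count(fetchval, limit=100):
    """Return (count, over_limit) for the current backend's prepared
    statements. Pass e.g. an asyncpg connection's fetchval method;
    behind PgBouncer this samples whichever backend you were assigned."""
    count = await fetchval("SELECT count(*) FROM pg_prepared_statements")
    return count, count > limit
```

Calling it repeatedly through PgBouncer gives you a rough per-backend distribution, since each acquisition may land on a different backend.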

The complete comparison: every fix and its trade-offs

I have tested each approach. Here is the honest accounting:

| Approach | Query performance | Setup complexity | Prepared stmts | Pooler requirement |
| --- | --- | --- | --- | --- |
| statement_cache_size=0 | Baseline (no prep stmts) | Low | No | Any |
| PgBouncer 1.21+ tracking | 15-30% faster | Medium | Yes | PgBouncer >= 1.21 |
| Switch to psycopg3 | 10-20% faster (after warmup) | Medium-High | Yes (adaptive) | Any (prepare_threshold=None if pooler lacks tracking) |
| Session-mode pooling | 15-30% faster | Low | Yes | Any (session mode) |
| Gold Lapel proxy | 15-30% faster + query optimization | Low | Yes | None (built-in) |

A few observations from this table:

If you control your PgBouncer version, upgrading to 1.21+ and enabling max_prepared_statements is the least disruptive fix. You change one config file, keep your asyncpg code untouched, and retain prepared statement performance. This is the approach I recommend for teams that own their infrastructure.
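For reference, the change really is one directive — a minimal pgbouncer.ini fragment, with values you would adapt to your own deployment:

```ini
[pgbouncer]
pool_mode = transaction

; Available since PgBouncer 1.21: track protocol-level prepared
; statements per client and replay them onto whichever backend
; serves the transaction. The value caps the number of prepared
; statements kept active on a single server connection.
max_prepared_statements = 100
```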

If you are on a managed service (Supabase, Neon, Railway, Render) where you cannot configure the pooler, statement_cache_size=0 is your immediate fix, and switching to psycopg3 with prepare_threshold=5 is your long-term option. Some managed services are adopting PgBouncer 1.21+ or equivalent statement tracking — check your provider's documentation or changelog for updates.

If you are evaluating your architecture and this is one of several pooler-related headaches, session-mode pooling eliminates the entire category of problems — at the cost of higher backend connection counts. This is the approach Gold Lapel takes internally, which is why it never encounters this issue.

If you have already disabled the statement cache and your performance is acceptable, you may reasonably decide that the complexity of the other solutions is not justified. Not every optimization is worth pursuing. If your P99 latency is within budget and your throughput headroom is comfortable, the simplest fix is often the best fix. I have opinions about leaving performance on the table, but I recognize that "works reliably and is easy to understand" has its own merit.

A note on what lies beneath the prepared statement problem

I have spent a considerable amount of time on this topic, and I want to share something that the GitHub issues and Stack Overflow answers do not typically mention.

The prepared statement trap is a symptom of a deeper architectural tension: transaction-mode pooling and stateful protocol features are fundamentally incompatible. Prepared statements are one casualty. But they are not the only one.

  • SET commands — session variables, search_path, statement_timeout, work_mem — are per-connection state that disappears when the backend is reassigned. If your application sets statement_timeout = '5s' at the start of each request, that setting may or may not be present on the backend you receive for the next request.
  • LISTEN/NOTIFY — pub/sub subscriptions are bound to the backend connection. In transaction mode, you subscribe on one backend and may receive notifications on a different one (or, more likely, miss them entirely).
  • Advisory locks — pg_advisory_lock() is session-scoped. If you acquire a lock in one transaction and expect it to persist across transactions, transaction-mode pooling will release the backend (and the lock) between transactions.
  • Temporary tables — created within a transaction, but accessible across the session. In transaction mode, the temporary table may not be on the same backend for the next transaction that tries to read it.
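For the SET problem specifically, there is a pooler-safe variant worth knowing: SET LOCAL is transaction-scoped, so it executes on the same backend that runs your query and evaporates at COMMIT. A sketch, assuming an asyncpg pool:

```python
async def fetch_with_timeout(pool, timeout_ms, query, *args):
    """Run a query with a per-transaction statement_timeout.
    Safe under transaction-mode pooling because SET LOCAL never
    outlives the transaction that carried it to the backend."""
    async with pool.acquire() as conn:
        async with conn.transaction():
            # int() guards the interpolated value, since SET does
            # not accept bind parameters.
            await conn.execute(f"SET LOCAL statement_timeout = {int(timeout_ms)}")
            return await conn.fetch(query, *args)
```

The same pattern works for work_mem or search_path; anything applied with SET LOCAL travels with the transaction instead of clinging to a backend you no longer hold.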

PgBouncer 1.21's prepared statement tracking is a targeted fix for one symptom. It does not help with SET commands, notifications, advisory locks, or temporary tables. Each of those requires its own workaround — or a different pooling mode.

For most web applications — stateless request handlers, parameterized queries, no session-level configuration — transaction-mode pooling with statement_cache_size=0 or PgBouncer 1.21+ tracking is perfectly adequate. But if you find yourself working around pooler limitations in multiple places, it may be time to reconsider the pooling mode rather than patching each symptom individually. The proxy architecture you choose determines which trade-offs you accept, and that choice deserves more deliberation than it typically receives.

An honest counterpoint: when asyncpg is not the right tool

I would be remiss if I did not address this directly. asyncpg's statement cache is both its greatest performance advantage and its greatest operational liability behind a pooler. If your deployment permanently requires a transaction-mode pooler — because you are on a managed service, or because your connection count demands it — then asyncpg's design is working against your architecture.

In that situation, psycopg3 is not a compromise. It is a better architectural fit. Its adaptive prepare_threshold was designed with pooler compatibility in mind. Its performance is within 25-35% of asyncpg for raw queries, and for most web applications the driver overhead is a small fraction of total request latency.

I say this not to disparage asyncpg — it is an extraordinary piece of engineering, and Yury Selivanov's Cython implementation of the PostgreSQL binary protocol is one of the most impressive Python performance achievements I have encountered. But a tool optimized for direct connections, deployed behind a transaction-mode pooler with its primary optimization disabled, is a tool not being used as intended. If the prepared statement cache is permanently set to zero, you are paying the complexity cost of asyncpg without receiving its primary benefit.

The decision matrix is straightforward: if you can use PgBouncer 1.21+ or session-mode pooling, asyncpg with its statement cache is the fastest option. If you cannot, psycopg3 with prepare_threshold=5 provides a better balance of performance and compatibility. Both are excellent drivers. The right choice depends on the environment they will operate in, not on abstract benchmarks.

Where Gold Lapel fits in this picture

I should mention — briefly, since you came here for a fix, not a sales pitch — that Gold Lapel sidesteps this entire problem.

# Gold Lapel connection — prepared statements just work.
# No configuration. No workarounds. No performance trade-offs.

import asyncpg

# Point at Gold Lapel instead of PgBouncer:
pool = await asyncpg.create_pool(
    "postgresql://user:pass@localhost:5433/mydb",  # GL's port
    statement_cache_size=100,  # keep your statement cache
)

# Or with SQLAlchemy:
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost:5433/mydb",
    # No connect_args overrides needed.
    # No NullPool. No statement_cache_size=0.
)

# Gold Lapel handles session-mode pooling internally.
# Each client gets a stable backend mapping for prepared statements.
# You keep the 15-30% performance advantage. Zero configuration.

Gold Lapel is a PostgreSQL proxy that includes connection pooling as one of its capabilities (alongside auto-indexing, query rewriting, materialized view management, and result caching). Its pooling operates in session mode with intelligent backend assignment, meaning each client connection maintains a stable relationship with a backend process. Prepared statements, session variables, LISTEN/NOTIFY — all work exactly as they would with a direct PostgreSQL connection.

Because Gold Lapel understands query semantics at the SQL level — it parses every query through pg_query — it can make pooling decisions that a protocol-level pooler like PgBouncer cannot. It knows which connections have prepared statements, which have active subscriptions, and which are safely reassignable. This semantic awareness allows it to achieve connection multiplexing without sacrificing per-connection state.

If the prepared statement trap is the only problem you are solving, the fixes earlier in this guide are sufficient. If it is one of several friction points between your application and PostgreSQL, Gold Lapel may be worth a look. Change the connection string, keep your existing code, and the entire category of pooler-compatibility problems disappears.

I should mention, while you are here, that the FastAPI connection lifecycle deserves its own attention. I have written a guide to the FastAPI + PostgreSQL connection pool lifecycle — covering lifespan events, graceful shutdown, and the two pool configuration mistakes I encounter in every second codebase.