
Query proxy

Allow me to introduce the member of staff who greets every query at the door. A transparent intermediary between your application and PostgreSQL — no code changes required.

Concept · March 21, 2026 · 9 min read

A query proxy is a process that sits between your application and PostgreSQL, speaking the same wire protocol on both sides. Think of it as the maître d' of your database infrastructure — every query is greeted at the door, assessed, and directed to the right place. Your application connects to the proxy as if it were the database. The proxy intercepts every query, applies whatever logic it was built for — connection pooling, caching, routing, optimization — and forwards the query to PostgreSQL. Results flow back through the proxy to the application. From the application's perspective, nothing has changed. From the database's perspective, the proxy is just another client.

What a query proxy is

PostgreSQL clients and servers communicate using a well-defined binary protocol — the PostgreSQL wire protocol. Every driver, ORM, and GUI tool speaks this protocol. A query proxy implements both sides: it accepts client connections like a PostgreSQL server and opens backend connections to PostgreSQL like a client. It is, if you will, bilingual — fluent in both the language your application speaks and the language your database expects.

This dual role is what makes proxies transparent. Your application's connection string changes from pointing at the database to pointing at the proxy. The driver still performs the same handshake, sends queries the same way, and reads results in the same format. No code changes. No driver changes. No ORM reconfiguration beyond the connection URL.

Connection strings
-- Your application connects to the proxy instead of PostgreSQL directly.
-- From the application's perspective, nothing changes.

# Direct connection
postgresql://user:pass@db-host:5432/mydb

# Through a proxy (only the host/port changes)
postgresql://user:pass@proxy-host:6432/mydb

# The proxy speaks the same wire protocol as PostgreSQL.
# Your driver, ORM, and migration tool all work unchanged.

Proxies vary considerably in what they do with the queries passing through them. Some simply manage connections. Others inspect query content, rewrite SQL, cache results, or route queries to different backends. But the mechanism is always the same: wire protocol in, wire protocol out. The sophistication is in what happens between those two moments.

Types of query proxies

Query proxies serve different purposes, and the ecosystem has specialized along several lines.

Connection poolers

PgBouncer is the most widely deployed PostgreSQL proxy, and deservedly so. It accepts thousands of application connections and multiplexes them onto a small number of PostgreSQL backend connections. This solves the fundamental problem that each PostgreSQL connection is a forked process consuming 5-10 MB of memory. PgBouncer adds almost no latency and is operationally simple — a single-threaded, event-driven process with a small configuration file. It does one thing, and it does it well.

pgbouncer.ini
# PgBouncer — connection pooling proxy
# pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000

Caching proxies

ReadySet is a caching proxy that maintains incrementally updated materialized views in memory. When a query matches a cached view, ReadySet returns the result without hitting PostgreSQL. When the underlying data changes, the cache updates automatically. This is transparent query caching — the application issues normal SQL and gets faster results without knowing a cache is involved.
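The core bookkeeping can be sketched in a few lines. This is a deliberately naive model, not ReadySet's actual mechanism (ReadySet maintains incrementally updated views rather than dropping entries): results are keyed by query text and invalidated whenever a table they depend on changes.

```python
# Toy sketch of a caching proxy's bookkeeping: results keyed by query
# text, invalidated when a dependent table sees a write. Illustrative
# only; a real caching proxy updates entries incrementally.

class QueryCache:
    def __init__(self):
        self._cache = {}   # query text -> result rows
        self._deps = {}    # table name -> set of dependent queries

    def get(self, query):
        """Return a cached result, or None on a cache miss."""
        return self._cache.get(query)

    def put(self, query, tables, result):
        """Cache a result and record which tables it depends on."""
        self._cache[query] = result
        for table in tables:
            self._deps.setdefault(table, set()).add(query)

    def invalidate(self, table):
        """A write touched `table`: drop every dependent cached query."""
        for query in self._deps.pop(table, set()):
            self._cache.pop(query, None)

cache = QueryCache()
cache.put("SELECT count(*) FROM orders", ["orders"], [(42,)])
assert cache.get("SELECT count(*) FROM orders") == [(42,)]
cache.invalidate("orders")   # an INSERT/UPDATE hit `orders`
assert cache.get("SELECT count(*) FROM orders") is None
```

The interesting engineering lives in `invalidate`: dropping entries wholesale is simple but wasteful, which is precisely why incremental view maintenance is the harder, more valuable approach.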

Read/write routers

Pgpool-II parses each query to determine whether it is a read or a write and routes accordingly — reads to replicas, writes to the primary. This spreads read load across multiple servers without requiring the application to manage multiple connection strings or decide which queries go where.

pgpool.conf
# Pgpool-II — read/write routing proxy
# pgpool.conf
backend_hostname0 = 'primary-host'
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = 'replica-host'
backend_port1 = 5432
backend_weight1 = 2
backend_flag1 = 'ALLOW_TO_FAILOVER'

# SELECTs are load-balanced across backends by weight;
# writes always go to the primary
load_balance_mode = on

Optimization proxies

Gold Lapel is an optimization proxy — and I should note, the one this household is built around. It intercepts queries, analyzes workload patterns over time, and applies automatic optimizations: creating materialized views, identifying missing indexes, rewriting queries to use faster execution paths. It also handles connection pooling, since it already sits in the connection path. More on this below.

How a proxy works

If you'll permit me a brief look behind the curtain: at a technical level, a query proxy manages two sets of connections and translates between them.

Query lifecycle through a proxy
-- What happens when a query passes through a proxy:

-- 1. Application opens a connection to the proxy
--    (TCP handshake, SSL, authentication — same as PostgreSQL)

-- 2. Application sends a query
SELECT u.name, count(o.id) AS order_count
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.created_at > now() - interval '30 days'
GROUP BY u.name;

-- 3. Proxy intercepts the query
--    - Connection pooler: assigns a backend connection, forwards the query
--    - Caching proxy: checks cache, returns cached result or forwards
--    - Optimization proxy: analyzes the query, may rewrite or route it
--    - Read/write router: parses the query, routes to primary or replica

-- 4. PostgreSQL executes the query and returns results

-- 5. Proxy forwards results to the application
--    (may also record statistics, update caches, etc.)

Wire protocol compatibility

The proxy must implement the PostgreSQL wire protocol faithfully — startup messages, authentication flows, query and parse messages, row descriptions, data rows, error responses, and all the nuances of extended query protocol. Any deviation and drivers break. This is not a place for approximation. It is why most proxies are purpose-built for PostgreSQL rather than generic TCP proxies: they need to understand the protocol deeply enough to intercept at the query level.
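To make "faithfully" concrete, here is a minimal sketch of just one piece: parsing the StartupMessage a client sends when it connects under protocol 3.0 (an Int32 length including itself, an Int32 version, then NUL-terminated name/value pairs ending with a single NUL). A real proxy implements dozens of message types with the same care; the function names here are illustrative.

```python
import struct

# Parse and build the PostgreSQL StartupMessage (protocol 3.0), the
# first bytes a proxy reads from a new client connection.

def parse_startup_message(data: bytes) -> dict:
    length, version = struct.unpack_from("!ii", data, 0)
    assert length == len(data), "truncated message"
    assert version == 196608, "expected protocol 3.0 (3 << 16)"
    params = {}
    fields = data[8:-1].split(b"\x00")  # drop the trailing terminator
    for name, value in zip(fields[::2], fields[1::2]):
        params[name.decode()] = value.decode()
    return params

def build_startup_message(params: dict) -> bytes:
    body = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00"
        for k, v in params.items()
    ) + b"\x00"
    return struct.pack("!ii", 8 + len(body), 196608) + body

msg = build_startup_message({"user": "app", "database": "mydb"})
assert parse_startup_message(msg) == {"user": "app", "database": "mydb"}
```

This is the easy message. Authentication exchanges, the extended query protocol, and error handling are where "any deviation and drivers break" becomes a daily engineering constraint.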

Connection multiplexing

Most proxies maintain a pool of backend connections to PostgreSQL and assign them to client requests as needed. A proxy handling 500 application connections might maintain only 20 backend connections. The savings are considerable: PostgreSQL performs better with fewer, busier connections than with many idle ones — a household with twenty attentive staff outperforms one with five hundred who mostly stand around. The proxy handles the fan-in, queueing client requests when all backend connections are active.
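The bookkeeping behind that fan-in can be sketched as a pool with an idle list and a wait queue. Real poolers do this inside an event loop handling thousands of sockets; this model shows only the assignment logic, with illustrative names throughout.

```python
from collections import deque

# Sketch of connection fan-in: a fixed set of backend connections
# handed out per request, with clients queued when all are busy.

class BackendPool:
    def __init__(self, size):
        self.idle = deque(f"backend-{i}" for i in range(size))
        self.waiting = deque()   # clients queued for a backend

    def acquire(self, client):
        """Give the client an idle backend, or queue the client."""
        if self.idle:
            return self.idle.popleft()
        self.waiting.append(client)
        return None              # client waits its turn

    def release(self, backend):
        """Backend freed: hand it to a waiting client, if any."""
        if self.waiting:
            return self.waiting.popleft(), backend
        self.idle.append(backend)
        return None

pool = BackendPool(size=2)
b1 = pool.acquire("client-1")             # gets backend-0
b2 = pool.acquire("client-2")             # gets backend-1
assert pool.acquire("client-3") is None   # all busy: queued
assert pool.release(b1) == ("client-3", "backend-0")
```

Note that a freed backend goes straight to the next waiting client rather than back to the idle list — this is the queueing behaviour that keeps a small pool of backends continuously busy.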

Query interception

The level of query interception varies by proxy type. A connection pooler like PgBouncer barely looks at the query content — it forwards bytes efficiently. A routing proxy like Pgpool-II must parse enough of each query to classify it as read or write. An optimization proxy like Gold Lapel fully parses and analyzes the query to determine whether an optimized execution path exists. The deeper the interception, the more capability the proxy offers and the more CPU it consumes per query.
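As a rough illustration of the shallow end of that spectrum, here is a first-keyword classifier. Real routers such as Pgpool-II use a full SQL parser; even this sketch must special-case `SELECT ... FOR UPDATE`, and it still mishandles writable CTEs — a hint of why deeper interception costs more CPU per query.

```python
# Naive read/write classification, as a routing proxy might perform it.
# Illustrative only: a production router parses the SQL properly.

def route(query: str) -> str:
    """Return 'replica' for plain reads, 'primary' for everything else."""
    first = query.lstrip().split(None, 1)[0].lower()
    if first in ("select", "with") and "for update" not in query.lower():
        return "replica"
    return "primary"   # writes, DDL, and anything ambiguous

assert route("SELECT * FROM users") == "replica"
assert route("UPDATE users SET name = 'x'") == "primary"
assert route("SELECT * FROM users FOR UPDATE") == "primary"
```

Defaulting the ambiguous cases to the primary is the safe choice: a read sent to the primary merely adds load, while a write sent to a replica fails.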

Benefits of a proxy

Running a proxy in front of PostgreSQL provides advantages that are difficult or impossible to achieve at the application layer alone. A good proxy earns its position.

No application code changes

The proxy is transparent. You change a connection string. Your application, your ORM, your migration tool, your admin scripts — they all continue to work, none the wiser. This is the fundamental value proposition of the proxy pattern: you add capabilities to the database layer without touching the application layer.

Centralized policy

When all queries flow through a single point, you can enforce policies consistently. Connection limits, query timeouts, access controls, and routing rules apply to every client regardless of which application, language, or framework originated the query. This is particularly valuable in environments with multiple services connecting to the same database.

Connection management

PostgreSQL's process-per-connection model scales poorly. A proxy absorbs the connection fan-out from your application tier and presents a controlled, smaller number of connections to PostgreSQL. This is the most commonly cited reason to run a proxy, and for good reason — it often provides the single largest improvement in connection-heavy workloads.

Observability

A proxy sees every query. Every single one. This makes it a natural point to collect query statistics, log slow queries, measure latency distributions, and track workload changes over time. Some proxies surface this data through dashboards or metrics endpoints, providing visibility that would otherwise require installing extensions like pg_stat_statements on the server. You cannot improve what you cannot observe.

Trade-offs

I should be direct about this: proxies are not free. They add value, but they also add cost and operational surface area. Any honest account of the proxy pattern must include what it costs you.

Added latency

Every query now takes an additional network hop: application to proxy, proxy to PostgreSQL, and the return path. For a well-deployed proxy on the same network, this adds under 1 ms per round trip. For proxies that parse or analyze queries, there is additional CPU overhead per query. In most architectures, the latency added by a proxy is smaller than the latency saved by connection pooling alone — but it is not zero, and anyone who tells you otherwise is selling something. Measure it.
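As a back-of-envelope version of that comparison: every number below is an assumed figure for illustration (connection setup cost and queries per short-lived connection vary widely by environment), not a measurement. The advice stands either way: measure your own deployment.

```python
# Illustrative arithmetic only; all three inputs are assumptions.
proxy_hop_ms = 0.5           # assumed proxy overhead per round trip (< 1 ms)
conn_setup_ms = 30.0         # assumed cost of a fresh backend (fork, auth, TLS)
queries_per_fresh_conn = 5   # assumed queries a short-lived client runs
                             # before disconnecting

# Without pooling, connection setup is amortized over few queries:
saved_per_query_ms = conn_setup_ms / queries_per_fresh_conn   # 6.0 ms
net_benefit_ms = saved_per_query_ms - proxy_hop_ms            # 5.5 ms

assert net_benefit_ms > 0   # under these assumptions, pooling wins
```

Flip the assumptions — long-lived connections, a proxy doing heavy per-query analysis — and the sign of `net_benefit_ms` can flip too, which is exactly why measurement beats arithmetic here.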

Operational complexity

A proxy is another process to deploy, configure, monitor, and upgrade. It needs health checks, log collection, and alerting. It has its own failure modes — if the proxy goes down, every application behind it loses database access. This is not a hypothetical concern; it is the central risk of the architecture. High-availability setups require running multiple proxy instances behind a load balancer, which adds further complexity. Every layer must earn its place in the household.

Another component to monitor

The proxy itself consumes CPU and memory, and it can become a bottleneck. If the proxy is CPU-bound (from query parsing, TLS termination, or extensive analysis), it limits the throughput of every client behind it. Monitoring the proxy's own resource usage and latency contribution is essential to avoid it becoming an invisible bottleneck.

Feature compatibility

Not every proxy supports every PostgreSQL feature. Session-level state (prepared statements, advisory locks, SET commands, LISTEN/NOTIFY) can be problematic when the proxy multiplexes connections. Proxies are getting better at this — PgBouncer 1.21+ handles prepared statements transparently, for example — but it remains an area that requires testing with your specific workload.
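The failure mode is easy to simulate. In this toy model (no real proxy involved, and only SET is modelled), a client's session setting lands on one pooled backend, and its next transaction is assigned another:

```python
# Why session state breaks under transaction pooling: per-backend
# session variables, with the pool free to assign either backend.

backends = [{}, {}]   # session variables held by each of two backends

def run_transaction(statements, backend_id):
    """Apply any SET statements to the backend the pool assigned."""
    session = backends[backend_id]
    for stmt in statements:
        if stmt.lower().startswith("set "):
            _, name, _, value = stmt.split()
            session[name] = value
    return session

# Transaction 1: client sets a variable; the pool assigns backend 0.
run_transaction(["SET search_path TO app"], backend_id=0)

# Transaction 2: same client, but the pool hands it backend 1,
# which never saw the SET.
seen = run_transaction(["SELECT 1"], backend_id=1)
assert "search_path" not in seen          # the setting is "lost"
assert backends[0]["search_path"] == "app"
```

Poolers mitigate this with session-pooling modes, reset queries between assignments, or, as with PgBouncer's prepared statement support, by tracking the state themselves and replaying it on whichever backend the client lands on.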

How Gold Lapel relates

Gold Lapel is a query proxy. It sits between your application and PostgreSQL, speaks the wire protocol, and intercepts every query. This is not a peripheral feature — it is the entire premise. The proxy is not something Gold Lapel uses; the proxy is what Gold Lapel is.

Where Gold Lapel differs from other proxies is in what it does with the queries it intercepts. Rather than just pooling connections or routing reads and writes, Gold Lapel analyzes query patterns over time and applies optimizations automatically. It creates materialized views for expensive aggregations, identifies missing indexes, and rewrites queries to use faster execution paths when they exist. The proxy position is what makes this possible — like a maître d' who, by greeting every guest, learns the rhythms of the dining room well enough to anticipate what the kitchen needs before it is asked. Gold Lapel sees every query before it reaches PostgreSQL, which gives it the information it needs to optimize without requiring changes to your application code.

Because Gold Lapel already sits in the connection path, it includes connection multiplexing as a built-in capability. Application connections terminate at Gold Lapel, which maintains its own pool of backend connections to PostgreSQL. If you are currently running PgBouncer solely for connection pooling, Gold Lapel can replace that layer. If you are running PgBouncer for other reasons — specific pooling modes, established operational familiarity — the two coexist comfortably. Gold Lapel connects to PgBouncer, which connects to PostgreSQL.

Deployment is a connection string change. Point your application at Gold Lapel instead of directly at PostgreSQL. No code changes, no ORM plugins, no query annotations. The same transparency that makes PgBouncer easy to adopt applies here — though I would hope we do rather more with the opportunity.

Frequently asked questions