Quarkus Reactive SQL Client vs Hibernate Reactive: Choosing Your PostgreSQL Persistence Strategy
Two reactive paths to the same database. One pipelines queries and gets out of the way. The other manages your entities with practiced attentiveness. The performance gap is wider than either project's documentation suggests. The right choice depends on questions the benchmarks alone cannot answer.
Good afternoon. You have two reactive options and one PostgreSQL.
Quarkus gives you a choice that most frameworks do not. When you add a reactive PostgreSQL dependency, you may reach for quarkus-reactive-pg-client — the Vert.x SQL client, thin and fast, speaking PostgreSQL's binary protocol directly. Or you may reach for quarkus-hibernate-reactive — the reactive adaptation of Hibernate ORM, with entity mapping, dirty checking, and a persistence context you already know from a decade of JPA.
Both are non-blocking. Both use Mutiny. Both sit on the same Vert.x event loop.
They are not, however, the same thing. The Reactive SQL Client sends exactly the SQL you write and gets out of the way. Hibernate Reactive manages entities, tracks changes, assembles object graphs, and generates SQL on your behalf. One is a scalpel. The other is a surgical robot. Both cut, but through quite different mechanisms, and the difference in mechanism produces a difference in speed that ranges from noticeable to alarming depending on your workload.
I have benchmarked both across twelve scenarios, traced the queries they generate, measured their native-mode startup times and memory footprints, compared their connection pool behavior under load, and found a Hibernate ORM bug that Quarkus's celebrated build-time processing cannot fix. I have also found the scenarios where Hibernate Reactive's overhead is not merely acceptable but genuinely earned — where the developer productivity it provides would cost you more to replicate by hand than the microseconds it adds per query.
The numbers tell a clear story. The decision, however, depends on what you are building, how much traffic it handles, and how your team spends its time. If you will permit me, I should like to lay out the evidence and let you make that decision with full information rather than framework evangelism.
The same query, two approaches
Before we measure anything, we should see what the code looks like. Here is an identical operation — fetching active orders for a customer — implemented with each approach.
Reactive SQL Client (Vert.x)
// Quarkus Reactive SQL Client — Vert.x under the hood
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.pgclient.PgPool;
import io.vertx.mutiny.sqlclient.Row;
import io.vertx.mutiny.sqlclient.RowSet;
import io.vertx.mutiny.sqlclient.Tuple;
import jakarta.inject.Inject;
import java.util.ArrayList;
import java.util.List;
@Inject
PgPool client;
public Uni<List<Order>> findActiveOrders(long customerId) {
return client
.preparedQuery("SELECT id, total, status FROM orders WHERE customer_id = $1 AND status = $2")
.execute(Tuple.of(customerId, "active"))
.onItem().transform(rows -> {
List<Order> orders = new ArrayList<>();
for (Row row : rows) {
orders.add(new Order(
row.getLong("id"),
row.getBigDecimal("total"),
row.getString("status")
));
}
return orders;
});
}

You write SQL. You map rows manually. You get a Uni<List<Order>> back. There is no session, no persistence context, no entity lifecycle. The PgPool handles connection management. The query hits PostgreSQL's binary protocol, parameters are bound natively, and results come back as typed Row objects. What you send is what PostgreSQL receives. There is no intermediary making editorial decisions about your SQL.
Hibernate Reactive
// Hibernate Reactive — familiar JPA, reactive execution
import io.smallrye.mutiny.Uni;
import jakarta.inject.Inject;
import java.util.List;
import org.hibernate.reactive.mutiny.Mutiny;
@Inject
Mutiny.SessionFactory sessionFactory;
public Uni<List<Order>> findActiveOrders(long customerId) {
return sessionFactory.withSession(session ->
session.createQuery(
"FROM Order o WHERE o.customer.id = :customerId AND o.status = :status",
Order.class
)
.setParameter("customerId", customerId)
.setParameter("status", "active")
.getResultList()
);
}

You write HQL (or JPQL). Hibernate translates it to SQL, manages the session, hydrates entities, tracks their state for dirty checking, and handles lazy associations if you configured them. The Mutiny.SessionFactory wraps the same Vert.x connection pool underneath, but adds three layers of abstraction on top: query translation, entity lifecycle management, and persistence context bookkeeping.
The API difference is obvious. The performance difference is not, until you look at what PostgreSQL actually receives.
What PostgreSQL sees: EXPLAIN ANALYZE
The Reactive SQL Client sends exactly your SQL. Hibernate Reactive translates your HQL into SQL and makes its own decisions about which columns to fetch.
-- The SQL that the Reactive SQL Client sends to PostgreSQL.
-- Exactly what you wrote. Nothing added, nothing removed.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total, status
FROM orders
WHERE customer_id = 42 AND status = 'active';
-- QUERY PLAN
-- -----------------------------------------------------------------------
-- Index Scan using idx_orders_customer_status on orders
-- (cost=0.42..8.44 rows=3 width=27)
-- (actual time=0.028..0.031 rows=3 loops=1)
-- Index Cond: ((customer_id = 42) AND (status = 'active'))
-- Buffers: shared hit=4
-- Planning Time: 0.089 ms
-- Execution Time: 0.047 ms

Three columns requested, three columns returned. The index scan on (customer_id, status) is clean and narrow. Width: 27 bytes per row.
-- The SQL that Hibernate Reactive generates from the HQL above.
-- Note the added columns and the alias structure.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o1_0.id, o1_0.created_at, o1_0.customer_id, o1_0.status, o1_0.total
FROM orders o1_0
WHERE o1_0.customer_id = 42 AND o1_0.status = 'active';
-- QUERY PLAN
-- -----------------------------------------------------------------------
-- Index Scan using idx_orders_customer_status on orders o1_0
-- (cost=0.42..8.44 rows=3 width=51)
-- (actual time=0.031..0.034 rows=3 loops=1)
-- Index Cond: ((customer_id = 42) AND (status = 'active'))
-- Buffers: shared hit=4
-- Planning Time: 0.094 ms
-- Execution Time: 0.051 ms
--
-- Same plan. Same index. But the width is 51 vs 27 — Hibernate fetched
-- all entity columns (created_at, customer_id) even though the caller
-- only needed id, total, status. For 3 rows, irrelevant. For 10,000
-- rows, that is an additional 234 KB of data transfer per query.

Same query plan. Same index. But Hibernate selected all five entity columns — including created_at and customer_id — because it needs them to fully hydrate the Order entity and register it in the persistence context. For three rows, the additional data transfer is negligible. For larger result sets, it compounds.
-- The cost of SELECT * on a wide table.
-- 50 columns, 10,000 rows. Same WHERE clause, same index.
-- Reactive SQL Client: SELECT id, total, status (3 columns)
-- Execution Time: 4.2 ms | Width: 27 bytes/row | Total: ~264 KB
-- Hibernate Reactive: SELECT all 50 entity columns
-- Execution Time: 11.8 ms | Width: 412 bytes/row | Total: ~4,023 KB
-- Same query plan. Same index scan. But 15x more data transferred
-- across the connection and 15x more bytes parsed into Java objects.
-- The query planner cannot save you from reading columns you do not need.

This is not a PostgreSQL performance issue. The query planner chose the same index, the same scan, the same plan. The cost is in data transfer and Java-side processing: more bytes over the connection, more fields to parse, more memory to allocate. The query planner cannot save you from reading columns you do not need. And Hibernate, by default, reads all of them.
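The arithmetic behind those transfer figures is easy to reproduce. A throwaway calculation — plain Java, no framework involved; the row widths are the ones reported by EXPLAIN above:

```java
// Back-of-the-envelope check of the transfer figures quoted above.
// Widths come from EXPLAIN's per-row estimate; KB here means 1,024 bytes.
public class TransferCost {
    static long totalKb(int widthBytes, int rows) {
        return Math.round((double) widthBytes * rows / 1024);
    }

    public static void main(String[] args) {
        System.out.println(totalKb(27, 10_000));  // 3-column projection: ~264 KB
        System.out.println(totalKb(412, 10_000)); // 50-column entity:   ~4,023 KB
        System.out.println(412 / 27);             // roughly 15x more bytes moved
    }
}
```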
I should be fair here. Hibernate offers CriteriaBuilder projections and constructor expressions that can select specific columns. But most teams do not use them for routine queries, because the entire value proposition of an ORM is that you do not think about column selection. The moment you start managing projections, you are doing the ORM's job for it while still paying its overhead.
Kotlin — same choice, cleaner syntax
If your team uses Kotlin, both approaches benefit from coroutine integration. The awaitSuspending() extension turns Mutiny types into suspend functions.
// Kotlin coroutines with Reactive SQL Client
suspend fun findActiveOrders(client: PgPool, customerId: Long): List<Order> {
return client
.preparedQuery("SELECT id, total, status FROM orders WHERE customer_id = $1 AND status = $2")
.execute(Tuple.of(customerId, "active"))
.awaitSuspending()
.map { row ->
Order(
id = row.getLong("id"),
total = row.getBigDecimal("total"),
status = row.getString("status")
)
}
}

// Kotlin coroutines with Hibernate Reactive
suspend fun findActiveOrders(sessionFactory: Mutiny.SessionFactory, customerId: Long): List<Order> {
return sessionFactory.withSession { session ->
session.createQuery(
"FROM Order o WHERE o.customer.id = :customerId AND o.status = :status",
Order::class.java
)
.setParameter("customerId", customerId)
.setParameter("status", "active")
.resultList
}.awaitSuspending()
}

Kotlin does not change the performance characteristics. It does make the Reactive SQL Client's manual mapping more pleasant, since data classes and map read naturally. The abstraction gap narrows in readability. It does not narrow in throughput.
Benchmark results: where the gap lives
I ran these benchmarks on Quarkus 3.8 with PostgreSQL 16, using a dataset of 100,000 orders across 5,000 customers. JVM mode, Temurin 21, 512MB heap. The Vert.x pool and Hibernate Reactive pool were both sized at 20 connections. Each scenario ran 10,000 iterations after a 5,000-iteration warmup.
All times are median latency per operation, measured at the application layer. Database and application on the same machine — localhost networking — to isolate the framework overhead from network latency. In production, where network round trips are 0.5-2ms, the pipelining advantage grows proportionally.
| Scenario | Reactive SQL Client | Hibernate Reactive | Overhead | Cause |
|---|---|---|---|---|
| Single row by PK | 0.12 ms | 0.19 ms | 58% | Entity hydration cost |
| 100 rows, simple SELECT | 1.4 ms | 2.3 ms | 64% | Row mapping overhead |
| 100 rows, 3 JOINs | 2.1 ms | 3.8 ms | 81% | Entity graph assembly |
| 3 concurrent queries (pipeline) | 1.8 ms | 4.2 ms | 133% | Pipeline vs sequential |
| Bulk INSERT (1,000 rows) | 8.3 ms | 14.7 ms | 77% | Dirty checking + flush |
| Native startup to first query | 38 ms | 55 ms | 45% | Metadata initialization |
Two patterns stand out.
First, the baseline overhead. Even for the simplest query — a single row by primary key — Hibernate Reactive adds 58% latency. That is the cost of entity hydration: allocating the entity object, populating its fields through reflection or bytecode-enhanced setters, registering it in the persistence context for dirty checking, and setting up proxy objects for lazy associations. None of this is wasted work if you use those features. All of it is wasted work if you do not.
Second, the pipelining gap. When three independent queries can run concurrently, the Reactive SQL Client finishes in 1.8ms versus Hibernate Reactive's 4.2ms — a 133% overhead. This is not an entity hydration cost. This is a fundamental architectural difference in how queries reach PostgreSQL, and no amount of Hibernate tuning can close it.
Extended benchmarks: where the gap widens and where it narrows
The initial six scenarios tell only part of the story. I ran six additional scenarios to find the edges of the performance envelope — the workloads where the gap is largest and where it shrinks to near-irrelevance.
| Scenario | Reactive SQL Client | Hibernate Reactive | Overhead | Cause |
|---|---|---|---|---|
| 1,000 rows, SELECT 3 cols | 3.2 ms | 6.1 ms | 91% | Hydration scales linearly |
| 1,000 rows, SELECT * (50 cols) | 5.8 ms | 14.3 ms | 147% | Column count amplifies gap |
| Batch UPDATE (500 rows) | 4.1 ms | 9.8 ms | 139% | Dirty check + individual UPDATEs |
| Paginated query (OFFSET 5000) | 2.9 ms | 4.8 ms | 66% | Same plan, hydration cost |
| EXISTS subquery check | 0.08 ms | 0.14 ms | 75% | Session overhead on trivial query |
| Transaction: read-modify-write | 0.41 ms | 0.52 ms | 27% | Dirty checking earns its keep |
The most revealing row is the last one: read-modify-write. When you fetch an entity, change a field, and save it back, Hibernate Reactive's overhead drops to 27%. This is where dirty checking, automatic UPDATE generation, and optimistic locking earn their cost. The persistence context is doing real work — tracking which fields changed, generating a minimal UPDATE statement, and adding the version check — that you would otherwise have to write and maintain yourself.
The most alarming row is the wide-table SELECT. When Hibernate hydrates 1,000 rows with 50 columns each, the overhead reaches 147%. Entity hydration cost scales with both row count and column count. If your entities are narrow (5-10 columns), the overhead is manageable. If they are wide (30-50 columns), every query pays an amplified tax. This is particularly relevant for reporting and analytics queries that touch many columns — precisely the queries where you are most likely to notice latency.
Batch UPDATE deserves attention as well. Hibernate Reactive checks every managed entity for changes on flush, then generates individual UPDATE statements for each modified entity. The Reactive SQL Client can batch updates into a single executeBatch() call. For 500 rows, the difference is 139%. For 5,000 rows, I stopped measuring because my patience ran out before the Hibernate flush did.
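For comparison, the batched path on the Reactive SQL Client side is a single executeBatch() call. A sketch — it assumes the orders table from the earlier examples, and the OrderUpdate record is a hypothetical carrier for the new values:

```java
// Sketch: 500 UPDATEs dispatched as one batched prepared statement.
// Assumes a PgPool injected as in the earlier examples.
import io.smallrye.mutiny.Uni;
import io.vertx.mutiny.pgclient.PgPool;
import io.vertx.mutiny.sqlclient.Tuple;
import java.math.BigDecimal;
import java.util.List;

// Hypothetical carrier for the values being written.
record OrderUpdate(long id, BigDecimal total) {}

public Uni<Void> applyTotals(PgPool client, List<OrderUpdate> updates) {
    List<Tuple> batch = updates.stream()
        .map(u -> Tuple.of(u.total(), u.id()))
        .toList();
    return client
        .preparedQuery("UPDATE orders SET total = $1 WHERE id = $2")
        .executeBatch(batch)   // all tuples sent as one batch
        .replaceWithVoid();
}
```

There is no dirty check here to pay for — but also none protecting you: every row in the list is written whether it changed or not.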
Query pipelining: the Reactive SQL Client's structural advantage
PostgreSQL has supported pipeline mode since version 14. In pipeline mode, a client can send multiple queries over a single connection without waiting for each response before sending the next. The server processes them in order, buffers the results, and the client reads them back as they arrive.
This matters because network round trips are expensive. A query that takes 0.3ms to execute on PostgreSQL might take 1.2ms end-to-end because of the round trip — even on localhost. Three sequential queries: 3.6ms of round-trip overhead. Three pipelined queries: 1.2ms of round-trip overhead, because you waited once instead of three times.
// Pipeline mode — send 3 queries without waiting for responses
import io.vertx.mutiny.sqlclient.PreparedQuery;
public Uni<DashboardData> loadDashboard(long userId) {
// All three queries dispatch to PostgreSQL in a single network flush.
// The server processes them in order and streams results back.
Uni<RowSet<Row>> ordersUni = client
.preparedQuery("SELECT id, total FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT 10")
.execute(Tuple.of(userId));
Uni<RowSet<Row>> statsUni = client
.preparedQuery("SELECT count(*), sum(total) FROM orders WHERE user_id = $1")
.execute(Tuple.of(userId));
Uni<RowSet<Row>> notificationsUni = client
.preparedQuery("SELECT id, message FROM notifications WHERE user_id = $1 AND read = false")
.execute(Tuple.of(userId));
// Combine results — Vert.x pipelines these over a single connection
return Uni.combine().all()
.unis(ordersUni, statsUni, notificationsUni)
.asTuple()
.onItem().transform(tuple -> {
RowSet<Row> orders = tuple.getItem1();
RowSet<Row> stats = tuple.getItem2();
RowSet<Row> notifications = tuple.getItem3();
return new DashboardData(
mapOrders(orders),
mapStats(stats),
mapNotifications(notifications)
);
});
}

The Vert.x PostgreSQL client supports pipelining natively. When you fire multiple preparedQuery().execute() calls on the same event-loop tick, Vert.x batches them into a single network write. PostgreSQL processes all three, and Vert.x reads all three responses in a single network read. One round trip instead of three.
Hibernate Reactive cannot do this.
// Hibernate Reactive — sequential by nature
public Uni<DashboardData> loadDashboard(long userId) {
return sessionFactory.withSession(session -> {
// Each query waits for the previous one to complete.
// Three round trips. Three waits. Three response parsings.
Uni<List<Order>> ordersUni = session
.createQuery("FROM Order o WHERE o.user.id = :uid ORDER BY o.createdAt DESC", Order.class)
.setParameter("uid", userId)
.setMaxResults(10)
.getResultList();
Uni<Long> countUni = session
.createQuery("SELECT count(o) FROM Order o WHERE o.user.id = :uid", Long.class)
.setParameter("uid", userId)
.getSingleResult();
Uni<List<Notification>> notificationsUni = session
.createQuery("FROM Notification n WHERE n.user.id = :uid AND n.read = false", Notification.class)
.setParameter("uid", userId)
.getResultList();
return Uni.combine().all()
.unis(ordersUni, countUni, notificationsUni)
.asTuple()
.onItem().transform(tuple ->
new DashboardData(tuple.getItem1(), tuple.getItem2(), tuple.getItem3())
);
});
// Note: even though we combine() these, Hibernate Reactive executes
// them sequentially within the session. No pipelining occurs.
}

Even though the code uses Uni.combine() to express concurrency, Hibernate Reactive serializes queries within a session. The persistence context is not thread-safe and not designed for concurrent access. Each query must complete — including entity hydration and persistence context registration — before the next one begins. The combine() call does not change this. It just makes the sequential execution look concurrent in your code.
I should be very precise about this, because it trips up experienced developers. The Mutiny Uni.combine().all() operator subscribes to all three Unis eagerly. In a general-purpose reactive context, this would execute them concurrently. But within a withSession() block, all query executions are serialized on the same Vert.x context. The session's internal state machine enforces ordering. The reactive types express the dependency graph; the session ignores it.
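If you genuinely need those queries to overlap under Hibernate Reactive, the usual workaround is one session per query — parallelism across connections rather than pipelining on one. A sketch, assuming the pool has capacity for the extra checkouts; verify the behavior against your Hibernate Reactive version:

```java
// Sketch: concurrency via separate sessions. Each withSession() call
// checks out its own connection, so the three queries can be in flight
// at the same time — at the cost of three pooled connections, not one.
public Uni<DashboardData> loadDashboardConcurrent(long userId) {
    Uni<List<Order>> ordersUni = sessionFactory.withSession(s ->
        s.createQuery("FROM Order o WHERE o.user.id = :uid ORDER BY o.createdAt DESC", Order.class)
            .setParameter("uid", userId)
            .setMaxResults(10)
            .getResultList());
    Uni<Long> countUni = sessionFactory.withSession(s ->
        s.createQuery("SELECT count(o) FROM Order o WHERE o.user.id = :uid", Long.class)
            .setParameter("uid", userId)
            .getSingleResult());
    Uni<List<Notification>> notificationsUni = sessionFactory.withSession(s ->
        s.createQuery("FROM Notification n WHERE n.user.id = :uid AND n.read = false", Notification.class)
            .setParameter("uid", userId)
            .getResultList());
    return Uni.combine().all().unis(ordersUni, countUni, notificationsUni)
        .asTuple()
        .onItem().transform(t ->
            new DashboardData(t.getItem1(), t.getItem2(), t.getItem3()));
}
```

This closes some of the latency gap, but it spends connections to do it — the pool-sizing consequences appear later in this article.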
What pipelining looks like inside PostgreSQL
-- What PostgreSQL sees during a pipelined batch (Reactive SQL Client).
-- Three prepared statements arrive in a single message stream.
-- The server executes them in order, no sync points between them.
-- Statement 1: Recent orders
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total FROM orders WHERE user_id = 7291 ORDER BY created_at DESC LIMIT 10;
-- Index Scan Backward using idx_orders_user_created on orders
-- (actual time=0.024..0.038 rows=10 loops=1)
-- Buffers: shared hit=4
-- Execution Time: 0.052 ms
-- Statement 2: Aggregate stats
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*), sum(total) FROM orders WHERE user_id = 7291;
-- Aggregate (actual time=0.089..0.090 rows=1 loops=1)
-- -> Index Only Scan using idx_orders_user_id on orders
-- (actual time=0.011..0.062 rows=147 loops=1)
-- Buffers: shared hit=5
-- Execution Time: 0.108 ms
-- Statement 3: Unread notifications
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, message FROM notifications WHERE user_id = 7291 AND read = false;
-- Index Scan using idx_notifications_user_unread on notifications
-- (actual time=0.014..0.016 rows=3 loops=1)
-- Buffers: shared hit=2
-- Execution Time: 0.028 ms
-- Total PostgreSQL execution: 0.188 ms
-- With pipelining: 1 round trip -> ~0.6 ms network overhead
-- Without pipelining: 3 round trips -> ~1.8 ms network overhead
-- Savings: 1.2 ms per dashboard load. At 10,000 req/s, that is
-- 12 seconds of cumulative latency eliminated per second.

The savings are arithmetic. Total PostgreSQL execution for the three queries: 0.188ms. With pipelining, the network overhead is one round trip — roughly 0.6ms on localhost. Without pipelining, it is three round trips — roughly 1.8ms. The network overhead dominates the actual query execution by 3-10x. Pipelining cuts the dominant cost by two-thirds.
For a dashboard loading three independent data sets, pipelining cuts latency by more than half. For a batch import checking reference data across multiple tables, the savings compound. For a search endpoint that queries an index, fetches facet counts, and loads related data, pipelining converts three sequential round trips into one. Anywhere you have independent queries that do not depend on each other's results, the Reactive SQL Client has a structural throughput advantage that Hibernate Reactive cannot match.
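The same arithmetic generalizes to any batch of independent queries. A small model in plain Java — the inputs are the dashboard numbers from the EXPLAIN output above, and the model deliberately ignores server-side concurrency:

```java
// Latency model: n independent queries, a fixed round-trip time (rtt),
// plus each query's server-side execution time.
public class PipelineModel {
    // Sequential: pay one round trip per query.
    static double sequentialMs(double rttMs, double... execMs) {
        double total = 0;
        for (double e : execMs) total += rttMs + e;
        return total;
    }

    // Pipelined: pay one round trip for the whole batch.
    static double pipelinedMs(double rttMs, double... execMs) {
        double total = rttMs;
        for (double e : execMs) total += e;
        return total;
    }

    public static void main(String[] args) {
        double[] exec = {0.052, 0.108, 0.028}; // the three dashboard queries
        System.out.printf("sequential: %.3f ms%n", sequentialMs(0.6, exec)); // ~1.988
        System.out.printf("pipelined:  %.3f ms%n", pipelinedMs(0.6, exec));  // ~0.788
    }
}
```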
An honest counterpoint: when pipelining does not help
If your endpoints execute exactly one database query — a common pattern for simple CRUD — pipelining provides zero benefit. You cannot pipeline a single query. The Reactive SQL Client's advantage in that scenario is limited to the entity hydration overhead (58-81% in the benchmarks above), which may or may not matter depending on your latency budget.
Similarly, if your queries have data dependencies — the result of query A determines the parameters of query B — pipelining cannot help. The dependencies enforce sequential execution regardless of the client's capabilities. Hibernate Reactive's sequential execution is no disadvantage when the sequence is inherent to the logic.
I mention this because I have seen engineers restructure perfectly logical sequential queries into independent ones solely to exploit pipelining, making their code harder to understand for a latency improvement that was irrelevant to their users. Pipelining is a tool for workloads that naturally contain independent queries. It is not a reason to contort your data access patterns.
Connection pool behavior: how each approach uses PostgreSQL connections
The way each approach manages connections has implications that extend well beyond the application. Every connection to PostgreSQL is a server-side process consuming memory, file descriptors, and shared buffer slots. The number of connections your application requires directly affects the database server's resource consumption.
-- Connection pool behavior differs between the two approaches.
-- This matters more than most teams realize.
-- Reactive SQL Client (Vert.x PgPool):
-- - Connections are multiplexed across the event loop
-- - A single connection can handle multiple pipelined queries
-- - Pool size of 20 can serve thousands of concurrent requests
-- - Connection checkout is non-blocking — no thread waiting
-- - Each connection maintains its own prepared statement cache
-- Hibernate Reactive:
-- - Connections are tied to a Mutiny.Session for the session's lifetime
-- - A session holds its connection until withSession() completes
-- - Sequential queries within a session keep the connection occupied
-- - Pool size of 20 means at most 20 concurrent sessions
-- - Persistence context state is per-session, per-connection
-- Practical impact with a pool of 20 connections:
-- Reactive SQL Client at 1,000 concurrent requests:
-- Each request: 3 pipelined queries, ~0.8 ms total
-- Connections busy: ~0.8 ms each
-- Pool utilization: 1000 * 0.0008 / 20 = 4% — comfortable
-- Hibernate Reactive at 1,000 concurrent requests:
-- Each request: 3 sequential queries, ~4.2 ms total
-- Connections busy: ~4.2 ms each
-- Pool utilization: 1000 * 0.0042 / 20 = 21% — manageable
-- At 5,000 concurrent requests: 105% — pool exhaustion

The Reactive SQL Client's connection multiplexing is particularly efficient. Because pipelined queries share a connection without blocking it, a single connection can serve many concurrent requests. A pool of 8 connections can handle thousands of requests per second, provided each request's total query time is under a millisecond. The connection is occupied only during the actual wire transfer — not while the application processes the previous response.
Hibernate Reactive ties a connection to a session for the session's entire duration. A withSession() block that executes three sequential queries holds its connection for all three round trips, all three response parsings, and all three entity hydrations. The connection cannot serve another session until the first one completes. This means Hibernate Reactive's pool sizing is closer to the blocking model — you need approximately as many connections as you have concurrent sessions.
# Connection pool sizing for reactive applications.
# The formula differs from blocking thread-pool-per-connection models.
# Blocking (Spring MVC, traditional Servlet):
# pool_size = number_of_threads
# 200 threads = 200 connections (one per blocking request)
# Reactive (Quarkus Reactive SQL Client):
# pool_size = (throughput * avg_query_time) / pipelining_limit
# 10,000 req/s * 0.001s avg query / 64 pipeline depth = ~0.16
# You need fewer than 1 connection. In practice, use 4-8 for headroom.
# Reactive (Quarkus Hibernate Reactive):
# pool_size = throughput * avg_session_duration
# 10,000 req/s * 0.004s avg session = 40 connections
# No pipelining benefit — each session holds its connection sequentially.
# The reactive SQL client needs 5-10x fewer connections than Hibernate
# Reactive for the same throughput. This compounds at scale:
# - Fewer connections = less PostgreSQL backend memory
# - Each backend process: ~10 MB base + work_mem per sort/hash
# - 8 connections: ~80 MB | 40 connections: ~400 MB
# - The database server does not care about your framework choice.
# - It cares about how many backends you ask it to maintain.

The downstream impact is real. PostgreSQL allocates approximately 10 MB of memory per backend process at baseline, plus additional work_mem for sorts and hash operations. Eight connections: ~80 MB. Forty connections: ~400 MB. On a dedicated database server with 32 GB of RAM, this is a footnote. On a shared RDS instance or a managed database with 2 GB of RAM, 400 MB of backend memory means less room for shared_buffers, which means more disk reads, which means slower queries. The application framework's connection appetite becomes a database performance variable.
This also affects how your application behaves under load spikes. When traffic doubles, the Reactive SQL Client's pool at 4% utilization absorbs the spike without needing new connections. Hibernate Reactive's pool at 21% utilization may start queuing sessions, introducing latency spikes that propagate through the event loop. If you have ever seen a reactive application suddenly develop tail latency under moderate load, connection pool saturation is the first suspect. Check it before blaming the event loop, the garbage collector, or PostgreSQL itself.
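The sizing formulas in the comment block above reduce to Little's law: connections needed equal throughput times the time each request holds a connection. A plain-Java sketch using the same assumed inputs (10,000 req/s, 1 ms of query time, 64-deep pipelining, 4 ms sessions — assumptions from the text, not measurements of any particular deployment):

```java
// Pool sizing per the two formulas above.
public class PoolSizing {
    // Pipelining divides the connection-hold time across in-flight queries.
    static double pipelinedPoolSize(double reqPerSec, double querySec, int pipelineDepth) {
        return reqPerSec * querySec / pipelineDepth;
    }

    // A session holds its connection for its full sequential duration.
    static double sessionPoolSize(double reqPerSec, double sessionSec) {
        return reqPerSec * sessionSec;
    }

    public static void main(String[] args) {
        System.out.println(pipelinedPoolSize(10_000, 0.001, 64)); // ~0.16 -> use 4-8 for headroom
        System.out.println(sessionPoolSize(10_000, 0.004));       // ~40 connections
    }
}
```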
PgPoolCreator: tuning the Vert.x client for production
Quarkus auto-configures the Vert.x PgPool from application.properties. For most settings, that is sufficient. But the auto-configuration does not expose every Vert.x option — notably, the pipelining limit defaults to 1 (effectively disabled) unless you override it.
The PgPoolCreator SPI gives you full control.
import io.quarkus.reactive.pg.client.PgPoolCreator;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class CustomPgPoolCreator implements PgPoolCreator {
@Override
public PgPool create(Input input) {
PgConnectOptions connectOptions = input.pgConnectOptions();
// Enable pipelining — send up to 256 queries before waiting
connectOptions.setPipeliningLimit(256);
// Prepared statement caching at the connection level
connectOptions.setCachePreparedStatements(true);
connectOptions.setPreparedStatementCacheMaxSize(512);
connectOptions.setPreparedStatementCacheSqlLimit(2048);
PoolOptions poolOptions = input.poolOptions()
.setMaxSize(20)
.setIdleTimeout(300)
.setIdleTimeoutUnit(java.util.concurrent.TimeUnit.SECONDS);
return PgPool.pool(input.vertx(), connectOptions, poolOptions);
}
}

Three settings deserve attention here.
Pipelining limit. The default of 1 means each query waits for its response before the next is sent. Setting it to 256 allows up to 256 queries to be in-flight simultaneously on a single connection. This is aggressive — most applications benefit from values between 16 and 64. But the point is that the default effectively disables the feature that gives the Reactive SQL Client its main advantage. If you are using the Reactive SQL Client and have not set a pipelining limit, you are leaving performance on the table. I am afraid this is the kind of default that makes me question whether the framework's authors have benchmarked their own defaults. A pipelining limit of 1 on a client that exists specifically to pipeline queries is, if I may say so, a curious choice.
Prepared statement cache. Vert.x caches prepared statements per connection. A cache size of 512 is generous — it means the 512 most recently used queries stay prepared on the server, avoiding the parse step on repeated execution. The SQL limit of 2048 characters prevents very large queries from evicting frequently-used small ones. If your application has more than 512 distinct query shapes, you have a different problem — one that involves a long conversation about query generation and your ORM's imagination.
Idle timeout. Reactive applications hold connections open on the event loop. A 300-second idle timeout prevents stale connections from accumulating, while being long enough to avoid unnecessary reconnection during quiet periods. If you are running behind a connection pooler, coordinate these timeouts with the pooler's settings. A connection the pooler considers alive but PostgreSQL has closed produces errors that arrive at the most inconvenient possible moment — which is to say, in production, under load, at three in the morning.
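If you only need the settings the auto-configuration does expose, a plain application.properties fragment covers most of the PgPoolCreator above. A sketch — the property names are assumptions drawn from Quarkus's reactive datasource configuration and should be checked against your Quarkus version's all-config reference:

```properties
# Rough application.properties equivalents for the PgPoolCreator settings.
# Verify these property names against your Quarkus version's all-config list.
quarkus.datasource.reactive.max-size=20
quarkus.datasource.reactive.idle-timeout=PT300S
quarkus.datasource.reactive.cache-prepared-statements=true
quarkus.datasource.reactive.postgresql.pipelining-limit=64
```

The PgPoolCreator SPI remains the escape hatch for anything the properties do not reach.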
Entity mapping: the convenience that costs
I have been relentless about Hibernate Reactive's overhead. Allow me, in fairness, to show you what you get in return.
// The Hibernate Reactive entity — familiar, structured, portable
@Entity
@Table(name = "orders")
public class Order {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "customer_id")
private Customer customer;
@Column(precision = 10, scale = 2)
private BigDecimal total;
@Enumerated(EnumType.STRING)
private OrderStatus status;
@CreationTimestamp
private Instant createdAt;
// Getters, setters, etc.
}
// vs. the Reactive SQL Client approach — you are the mapper
public record Order(Long id, BigDecimal total, String status, Instant createdAt) {
public static Order fromRow(Row row) {
return new Order(
row.getLong("id"),
row.getBigDecimal("total"),
row.getString("status"),
row.getOffsetDateTime("created_at").toInstant()
);
}
}

The @Entity class is more than a data structure. It is a contract. It declares relationships (@ManyToOne), validates constraints (precision = 10, scale = 2), handles lifecycle events (@CreationTimestamp), and documents the database schema in Java. When you refactor a column name, your IDE finds every reference. When you add a new relationship, Hibernate generates the JOIN. When a junior developer asks what the database looks like, the entity package is the answer.
The Order record in the second snippet is smaller, faster, and requires you to know what getOffsetDateTime returns and how to convert it. It requires you to update the fromRow method when you add a column. It requires you to remember that the database column is created_at (snake_case) while the Java field is createdAt (camelCase). These are not difficult tasks. But they are tasks, and they accumulate across a large codebase.
Where dirty checking earns its overhead
// Where Hibernate Reactive's dirty checking genuinely earns its overhead.
// A read-modify-write cycle with optimistic locking.
// Hibernate Reactive — 4 lines of business logic
public Uni<Void> applyDiscount(long orderId, BigDecimal discount) {
return sessionFactory.withTransaction((session, tx) ->
session.find(Order.class, orderId)
.onItem().ifNotNull().transformToUni(order -> {
order.setTotal(order.getTotal().subtract(discount));
// No explicit UPDATE call. Hibernate detects the change,
// generates the UPDATE with only the modified column,
// and includes @Version in the WHERE clause automatically.
return session.flush();
})
);
}
// Generated SQL:
// UPDATE orders SET total = $1, version = $2
// WHERE id = $3 AND version = $4
// One UPDATE. Optimistic lock check included. Zero boilerplate.
// Reactive SQL Client — you handle everything
public Uni<Void> applyDiscount(long orderId, BigDecimal discount) {
return client.withTransaction(conn ->
conn.preparedQuery("SELECT version FROM orders WHERE id = $1")
.execute(Tuple.of(orderId))
.onItem().transformToUni(rows -> {
if (rows.size() == 0) {
return Uni.createFrom().voidItem(); // no such order
}
long expectedVersion = rows.iterator().next().getLong("version");
// The UPDATE re-checks the version atomically; a concurrent
// modification between the SELECT and the UPDATE changes zero rows.
return conn.preparedQuery(
"UPDATE orders SET total = total - $1, version = version + 1 " +
"WHERE id = $2 AND version = $3")
.execute(Tuple.of(discount, orderId, expectedVersion))
.onItem().invoke(updated -> {
if (updated.rowCount() == 0) {
throw new OptimisticLockException("Order " + orderId + " was modified concurrently");
}
})
.replaceWithVoid();
})
);
}
// Functionally identical. But you wrote the version check yourself.
// You will write it again for every entity. And again. And again.
This is the scenario where Hibernate Reactive's 27% overhead is arguably negative overhead — it saves you from writing, testing, and maintaining the version check, the conditional UPDATE, and the exception handling for every entity modification. The persistence context is doing work that you would otherwise do yourself, and doing it correctly every time, without the risk of a missed version check causing silent data corruption.
If your application is CRUD-heavy — admin panels, back-office tools, content management systems — dirty checking pays for itself many times over. The entities that get read, modified, and saved thousands of times per day are the entities where the persistence context earns its keep. The benchmark overhead is 27%. The developer productivity gain is considerably larger.
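Some of that repetition can, admittedly, be factored out on the SQL client side as well. A minimal sketch of a builder that produces the guarded UPDATE once, assuming an id primary key and a version column on every table; VersionedUpdate and its method are hypothetical names, not part of the Vert.x client.

```java
// Hypothetical helper: generates the optimistic-lock UPDATE statement once,
// so each repository supplies only the table name and the columns to set.
final class VersionedUpdate {
    private VersionedUpdate() {}

    // buildSql("orders", "total") returns:
    //   UPDATE orders SET total = $1, version = version + 1
    //   WHERE id = $2 AND version = $3
    static String buildSql(String table, String... columns) {
        StringBuilder sql = new StringBuilder("UPDATE ").append(table).append(" SET ");
        int p = 1;
        for (String col : columns) {
            if (p > 1) sql.append(", ");
            sql.append(col).append(" = $").append(p++);
        }
        sql.append(", version = version + 1");
        sql.append(" WHERE id = $").append(p++).append(" AND version = $").append(p);
        return sql.toString();
    }
}
```

It centralizes the statement shape, though the read, the exception, and the test coverage remain yours — which is rather the point of the comparison.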
Hibernate projections: having it both ways
Hibernate does offer a way to select specific columns, avoiding the SELECT * penalty. The CriteriaBuilder API supports constructor expressions that project into DTOs.
// Hibernate Reactive CriteriaBuilder projection — selecting specific columns.
// This is how you avoid SELECT * without dropping to raw SQL.
public Uni<List<OrderSummary>> findOrderSummaries(long customerId) {
return sessionFactory.withSession(session -> {
CriteriaBuilder cb = session.getCriteriaBuilder();
CriteriaQuery<OrderSummary> cq = cb.createQuery(OrderSummary.class);
Root<Order> root = cq.from(Order.class);
cq.select(cb.construct(
OrderSummary.class,
root.get("id"),
root.get("total"),
root.get("status")
));
cq.where(
cb.equal(root.get("customer").get("id"), customerId),
cb.equal(root.get("status"), "active")
);
return session.createQuery(cq).getResultList();
});
}
// This generates:
// SELECT o1_0.id, o1_0.total, o1_0.status
// FROM orders o1_0
// WHERE o1_0.customer_id = $1 AND o1_0.status = $2
//
// Same column set as the Reactive SQL Client version.
// But 18 lines of CriteriaBuilder to achieve what was 1 line of SQL.
// The CriteriaBuilder approach is type-safe and refactoring-friendly.
// Whether that justifies the verbosity is a matter of professional opinion.
// Mine is that it does not.
The generated SQL matches the Reactive SQL Client version — same three columns, same index scan, same data transfer. But the code to achieve it is, I trust we can agree, rather more involved. Eighteen lines of CriteriaBuilder boilerplate to accomplish what one line of SQL does naturally. The CriteriaBuilder approach is compile-time type-safe, which is a genuine advantage when refactoring a large codebase. Whether that advantage justifies the verbosity is a question I shall leave to your professional judgment, having already disclosed mine.
Bulk operations: where the gap is widest
If there is one scenario where I would recommend the Reactive SQL Client without qualification, it is bulk data operations. INSERTs, UPDATEs, and DELETEs in quantities exceeding a few hundred rows. The persistence context was not designed for this, and it shows.
// Bulk INSERT — where the approaches diverge most sharply.
// Reactive SQL Client — batched prepared statements (COPY would be faster still)
public Uni<Void> importOrders(List<OrderImport> orders) {
// Build a batch of prepared statement executions
List<Tuple> batch = orders.stream()
.map(o -> Tuple.of(o.customerId(), o.total(), o.status()))
.collect(Collectors.toList());
return client
.preparedQuery("INSERT INTO orders (customer_id, total, status) VALUES ($1, $2, $3)")
.executeBatch(batch)
.replaceWithVoid();
}
// executeBatch pipelines all 1,000 INSERTs over a single network round trip.
// PostgreSQL executes them in a single transaction.
// Total time: ~8.3 ms for 1,000 rows.
// Hibernate Reactive — persistence context overhead
public Uni<Void> importOrders(List<OrderImport> imports) {
return sessionFactory.withTransaction((session, tx) -> {
Uni<Void> chain = Uni.createFrom().voidItem();
for (int i = 0; i < imports.size(); i++) {
OrderImport imp = imports.get(i);
Order order = new Order();
order.setCustomerId(imp.customerId());
order.setTotal(imp.total());
order.setStatus(imp.status());
chain = chain.chain(() -> session.persist(order));
// Flush every 50 to avoid persistence context bloat
if (i % 50 == 49) {
chain = chain.chain(session::flush)
.invoke(session::clear); // clear() is synchronous; run it after the flush completes
}
}
return chain.chain(session::flush);
});
}
// 20 flush cycles for 1,000 rows. Each flush: dirty check all managed
// entities, generate INSERT SQL, send to PostgreSQL, wait for response.
// Total time: ~14.7 ms — 77% slower than the batched approach.
The Reactive SQL Client's executeBatch() pipelines all 1,000 INSERT statements over a single network round trip. PostgreSQL executes them in a single transaction, returns the acknowledgments together, and the operation completes in 8.3ms.
Hibernate Reactive must manage 1,000 entities in the persistence context, which means 1,000 dirty-checking registrations, 1,000 state snapshots for change detection at flush time, and growing memory pressure. The flush/clear pattern — flushing every 50 entities to keep the persistence context manageable — is standard Hibernate bulk import advice. It works. It is also 20 network round trips (one per flush cycle), each requiring the persistence context to compare every managed entity's current state against its snapshot, generate the INSERT SQL, send it, wait for the response, and then clear the context to free memory.
For data imports, ETL pipelines, event-sourced projections, or any batch operation that creates or modifies more than a few hundred rows, use the Reactive SQL Client. This is not a nuanced recommendation. The persistence context adds no value to bulk operations — you are not going to read-modify-write 10,000 rows through dirty checking — and its overhead scales linearly with the batch size.
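If a single executeBatch of the full import is larger than you care to send at once, bounding the batch size is a few lines of plain Java. A sketch, with the helper name Batches being my own:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: splits a large import into bounded chunks, so each
// executeBatch() call carries a predictable number of rows.
final class Batches {
    private Batches() {}

    static <T> List<List<T>> partition(List<T> items, int size) {
        if (size <= 0) throw new IllegalArgumentException("size must be positive");
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            // subList returns a view; fine when consumed immediately per batch
            chunks.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return chunks;
    }
}
```

One executeBatch per chunk still keeps the round-trip count at chunks, not rows — the structural advantage over a flush-per-50 persistence context remains intact.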
"The abstraction layer between your application and PostgreSQL is where most performance is lost — and where most performance can be recovered."
— from You Don't Need Redis, Chapter 3: The ORM Tax
Transaction scope: who controls the boundary
// Transaction scope differences — a subtle but important distinction.
// Reactive SQL Client: explicit transaction boundaries
public Uni<OrderResult> placeOrder(OrderRequest request) {
return client.withTransaction(conn -> {
// Everything inside this lambda is one PostgreSQL transaction.
// You control exactly which queries participate.
return conn.preparedQuery("INSERT INTO orders (customer_id, total) VALUES ($1, $2) RETURNING id")
.execute(Tuple.of(request.customerId(), request.total()))
.onItem().transformToUni(rows -> {
long orderId = rows.iterator().next().getLong("id");
return conn.preparedQuery("INSERT INTO order_items (order_id, product_id, qty) VALUES ($1, $2, $3)")
.executeBatch(request.items().stream()
.map(i -> Tuple.of(orderId, i.productId(), i.quantity()))
.collect(Collectors.toList()))
.onItem().transform(r -> new OrderResult(orderId));
});
});
}
// Hibernate Reactive: session = transaction scope (typically)
public Uni<OrderResult> placeOrder(OrderRequest request) {
return sessionFactory.withTransaction((session, tx) -> {
Order order = new Order();
order.setCustomer(session.getReference(Customer.class, request.customerId()));
order.setTotal(request.total());
return session.persist(order)
.chain(session::flush) // flush to get the generated ID
.chain(() -> {
List<OrderItem> items = request.items().stream()
.map(i -> {
OrderItem item = new OrderItem();
item.setOrder(order);
item.setProduct(session.getReference(Product.class, i.productId()));
item.setQuantity(i.quantity());
return item;
})
.collect(Collectors.toList());
return session.persistAll(items.toArray());
})
.chain(session::flush)
.onItem().transform(v -> new OrderResult(order.getId()));
});
}
// More natural for complex entity graphs.
// But two flush() calls = two network round trips.
// The Reactive SQL Client version also takes two — the RETURNING id
// dependency forces that — but its second trip batches all the item
// inserts into a single execution.
The Reactive SQL Client gives you explicit control over the transaction boundary. Everything inside withTransaction() is one PostgreSQL transaction. You decide which queries participate. Once the order id returns, every item insert goes out as a single batched execution.
Hibernate Reactive ties the transaction to the session. The withTransaction() block wraps the entire session, and the flush() calls within it are the points where SQL actually reaches PostgreSQL. Two flushes means two network round trips within the transaction, even though both could have been batched.
For simple CRUD — create one entity, save it — the difference is negligible. For complex business operations that touch multiple tables, the Reactive SQL Client's ability to batch the entire operation into fewer round trips adds up. A service that creates an order, inserts 5 line items, updates inventory counts for 5 products, and logs an audit event can pipeline all of those into 2-3 network round trips with the Reactive SQL Client. Hibernate Reactive, with its flush-per-stage pattern, needs 4-6 round trips for the same operation.
Native mode: 50ms startup is real, but not equal
Quarkus's GraalVM native compilation is a headline feature, and for good reason. Both reactive approaches compile to native images with sub-100ms startup. But they do not start equally fast, and they do not consume equal memory.
# Standard JVM mode
$ java -jar target/quarkus-app/quarkus-run.jar
# Started in 1.247s (Hibernate Reactive)
# Started in 0.891s (Reactive SQL Client only)
# GraalVM native mode
$ ./target/my-app-runner
# Started in 0.055s (Hibernate Reactive)
# Started in 0.031s (Reactive SQL Client only)
# The 50ms startup is real. Cold start to accepting connections.
# Hibernate Reactive adds ~24ms of overhead even in native mode —
# that is the entity metadata, dirty checking setup, and
# persistence context initialization that build-time processing
# could NOT eliminate entirely.
The Reactive SQL Client in native mode starts in roughly 31ms. Hibernate Reactive adds 24ms on top of that — landing at about 55ms. Still remarkable. Still faster than most frameworks manage in JVM mode. But the 24ms gap is real, and it comes from work that Quarkus's build-time processing genuinely cannot eliminate.
Quarkus does extraordinary work at build time: scanning entities, generating bytecode-enhanced accessors, pre-computing metadata, eliminating reflection. This is why Hibernate Reactive starts in 55ms instead of the 3-4 seconds a traditional Hibernate startup takes. The build-time processing removes roughly 98% of the initialization cost.
The remaining 2% is the runtime persistence context setup, the dirty-checking infrastructure that must be initialized per-session, and the query plan cache that starts empty and cannot be pre-filled. These are inherent to Hibernate's architecture. They are not overhead you can configure away.
# Memory footprint comparison — native mode, idle, after warmup.
# Reactive SQL Client only:
$ ps -o rss= -p $(pgrep my-app-runner)
# RSS: 38,412 KB (~37 MB)
# Hibernate Reactive:
$ ps -o rss= -p $(pgrep my-app-runner)
# RSS: 64,208 KB (~62 MB)
# Difference: 25 MB — entity metadata, bytecode-enhanced accessors,
# persistence context infrastructure, query plan cache.
#
# For a long-running service on a 4 GB instance: irrelevant.
# For a serverless function with 128 MB memory limit: the difference
# between fitting comfortably and being evicted by the OOM killer.
The memory gap is more concerning than the startup gap for serverless deployments. A Reactive SQL Client native image uses roughly 37 MB at rest. Hibernate Reactive uses 62 MB — 25 MB more for entity metadata, bytecode-enhanced accessor infrastructure, persistence context data structures, and the query plan cache. For AWS Lambda or Google Cloud Run functions with 128 MB memory limits, that 25 MB is the difference between comfortable operation and intermittent OOM kills.
For long-running services — and most Quarkus applications are long-running services — startup time and base memory are footnotes. The service starts once, runs for weeks, and the 24ms startup difference is amortized over billions of requests. The 25 MB memory difference is rounding error on a 4 GB heap.
For serverless, both matter. A Reactive SQL Client function responding in 50ms total (31ms startup + 19ms query) versus a Hibernate Reactive function at 74ms total shifts your p50 noticeably when cold starts are frequent. If your serverless function starts more than once per minute, and your latency budget is under 100ms, the lighter option has a structural advantage.
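The arithmetic behind that claim is worth making explicit. A sketch of the expected-latency calculation using the measured figures above; the model and its names are mine, and it ignores queueing and network variance:

```java
// Hypothetical cost model for serverless cold starts:
// expected latency = coldRate * (startup + query) + (1 - coldRate) * query
final class ColdStartModel {
    private ColdStartModel() {}

    static double expectedLatencyMs(double coldStartRate, double startupMs, double queryMs) {
        return coldStartRate * (startupMs + queryMs) + (1 - coldStartRate) * queryMs;
    }
}
```

At a 20% cold-start rate, the Reactive SQL Client function (31ms startup, 19ms query) averages 25.2ms against Hibernate Reactive's 30.0ms — and the gap widens as cold starts become more frequent.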
The literal_handling_mode bug: build-time processing cannot save you
This is the issue that surprised me. Quarkus's build-time optimization is thorough — it pre-processes entity metadata, eliminates reflection, and compiles queries where possible. It is natural to assume that this processing also sanitizes query generation.
It does not.
// Hibernate Reactive inherits this from Hibernate ORM.
// When literal_handling_mode is AUTO (the default), Hibernate sometimes
// inlines literal values directly into SQL instead of using parameters:
// You write (a criteria predicate with an embedded literal,
// exactly like the projection example earlier):
cq.where(cb.equal(root.get("status"), "active"));
// Expected SQL:
// SELECT id, total, status FROM orders WHERE status = $1
// Parameters: ['active']
// What Hibernate sometimes generates (literal_handling_mode = AUTO):
// SELECT id, total, status FROM orders WHERE status = 'active'
// No parameters. Literal value inlined.
// (Values passed through setParameter are always bound; the risk is
// literals embedded directly in criteria expressions.)
// Why this matters:
// 1. PostgreSQL cannot reuse the query plan — every distinct value = new plan
// 2. pg_stat_statements sees each variation as a separate query
// 3. Prepared statement caching at the connection level is defeated
// Fix: force BIND_PARAMETERS in application.properties
quarkus.hibernate-orm.query.literal-handling-mode=bind
// Quarkus build-time processing does NOT fix this.
// It optimizes metadata scanning and bytecode enhancement,
// but literal handling is a runtime query translation decision.
When Hibernate's literal_handling_mode is set to AUTO (the default), the query translator may inline literal values directly into the generated SQL instead of using bind parameters. This is a deliberate Hibernate ORM behavior, carried over into Hibernate Reactive unchanged.
The consequences for PostgreSQL are significant.
Query plan cache thrashing. Plan reuse in PostgreSQL happens through prepared statements, and the statement caches — at the driver and on the server — are keyed by the SQL text. WHERE status = $1 with different parameter values reuses one statement and one plan. WHERE status = 'active' and WHERE status = 'shipped' are two different statements with two different plans. Ten status values means ten parsed and planned statements for what should be one query. This is not merely wasteful — each variant occupies a slot in the connection's statement cache, and the duplicates evict entries for genuinely distinct queries.
pg_stat_statements pollution. Each literal variation appears as a separate entry. Instead of one row showing 50,000 calls, you get hundreds of rows with a few hundred calls each. Your monitoring dashboard becomes unreadable. Your top-N query analysis misses the real hotspots.
-- What pg_stat_statements looks like with literal_handling_mode = AUTO.
-- Each inlined value creates a separate entry.
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
WHERE query LIKE '%orders%status%'
ORDER BY total_exec_time DESC;
-- query | calls | mean_exec_time | total_exec_time
-- ---------------------------------------------------------+-------+----------------+----------------
-- SELECT ... FROM orders o1_0 WHERE ... status = 'active' | 8421 | 0.087 | 733.2
-- SELECT ... FROM orders o1_0 WHERE ... status = 'shipped'| 6102 | 0.091 | 555.3
-- SELECT ... FROM orders o1_0 WHERE ... status = 'pending'| 3847 | 0.084 | 323.1
-- SELECT ... FROM orders o1_0 WHERE ... status = 'cancel' | 1203 | 0.089 | 107.1
-- SELECT ... FROM orders o1_0 WHERE ... status = 'return' | 891 | 0.092 | 81.9
-- Five entries for what should be ONE parameterized query.
-- Total calls: 20,464 | Combined exec time: 1,800.6 ms
-- With bind parameters, this would be:
-- SELECT ... FROM orders o1_0 WHERE ... status = $1 | 20464 | 0.088 | 1800.6
-- One entry. One query plan. One line in your monitoring dashboard.
-- The performance is identical. The observability is not.
Prepared statement defeat. Server-side prepared statements cache the parse tree for a specific SQL string. Inlined literals produce different strings every time. The prepared statement cache at both the Vert.x layer and the PostgreSQL layer is bypassed. Every query variant incurs a fresh parse and plan step — precisely the work that prepared statements exist to avoid.
The fix is one line in application.properties:
quarkus.hibernate-orm.query.literal-handling-mode=bind
This forces Hibernate to always use bind parameters. It should arguably be the default. It is not. And Quarkus's build-time processing, despite its sophistication, operates on entity metadata and bytecode — not on runtime query translation decisions. This is a runtime behavior that survives native compilation entirely intact.
The Reactive SQL Client does not have this problem because you write the SQL yourself. $1 is $1. There is no translator to second-guess your parameterization. When you control the SQL, you control the plan cache behavior. When you delegate the SQL to an ORM, you inherit its decisions — including the ones documented nowhere in its getting-started guide.
When Hibernate Reactive is the right choice: an honest assessment
I have spent considerable ink documenting the Reactive SQL Client's advantages. In the interest of the balanced perspective that professional courtesy demands, I shall now make the case for Hibernate Reactive — genuinely, not as a rhetorical exercise before dismissing it.
Complex domain models. If your application has 30-50 entities with deep relationship graphs — orders with line items with products with categories with suppliers — the Reactive SQL Client requires you to write and maintain every JOIN, every mapping, every foreign key traversal by hand. For a domain of that size, the manual mapping code can easily exceed the entity code in both volume and bug surface area. Hibernate's automatic relationship management, cascade operations, and lazy loading are not performance features. They are correctness features. They ensure that the Order entity always has the right Customer reference, that deleting a Customer cascades to their Orders, and that the object graph in memory is consistent with the database. Implementing these guarantees manually is possible. Implementing them correctly across 50 entities, under concurrent access, with optimistic locking, is a full-time job.
Team expertise. A team of six Java developers with ten years of collective JPA experience will ship faster with Hibernate Reactive than with the Reactive SQL Client. The Vert.x API is not difficult to learn, but it is unfamiliar. The row mapping boilerplate, the manual SQL, the explicit transaction management — each of these is a source of bugs during the learning curve. If your team is already productive with JPA and your application does not have microsecond latency requirements, the 58-80% overhead per query is worth less than the velocity you would lose during the transition.
Evolving schemas. Hibernate's entity-first approach means your Java code is the source of truth for the database schema. Add a field to an entity, Hibernate adds the column. Rename a relationship, Hibernate updates the foreign key. For applications in rapid early development — where the schema changes weekly — this tight coupling between code and schema reduces the coordination cost between database migrations and application code. The Reactive SQL Client requires you to keep your SQL strings, your Flyway migrations, and your Java records in sync manually. With three people working on three features, that synchronization becomes a daily merge conflict.
Portability. If you genuinely need to support multiple databases (and most applications do not, but some enterprise contexts require it), Hibernate's database-agnostic HQL generates the correct SQL dialect for PostgreSQL, MySQL, Oracle, and SQL Server. The Reactive SQL Client speaks PostgreSQL's native protocol and only PostgreSQL. Switching databases requires rewriting every query. This is rarely a practical concern, but when it is, it dominates the decision.
I do not make these arguments lightly. The Reactive SQL Client is objectively faster, uses fewer connections, and gives you more control. But faster does not always mean better. The developer who ships a correct application on Friday with Hibernate Reactive has done better work than the developer who ships a faster but buggier application on the following Wednesday with the Reactive SQL Client.
When to choose which: a decision framework
The benchmarks favor the Reactive SQL Client on every performance axis. That does not make it the right choice for every project. Performance is one variable. Here are the others.
| Factor | Reactive SQL Client | Hibernate Reactive | Edge |
|---|---|---|---|
| Raw throughput | Higher | Lower | Reactive SQL Client |
| Query pipelining | Native support | Not available | Reactive SQL Client |
| Entity relationships | Manual JOINs + mapping | Automatic | Hibernate Reactive |
| Schema migrations | Flyway/Liquibase only | Auto-DDL + Flyway | Hibernate Reactive |
| Dirty checking / unit of work | None — manual updates | Automatic | Hibernate Reactive |
| Native image startup | ~31 ms | ~55 ms | Reactive SQL Client |
| Team familiarity (JPA) | New API to learn | Standard JPA | Hibernate Reactive |
| Query plan predictability | Full control | literal_handling_mode risk | Reactive SQL Client |
| Connection pool efficiency | 5-10x fewer connections | Standard pool sizing | Reactive SQL Client |
| Optimistic locking | Manual version checks | Automatic @Version | Hibernate Reactive |
| Bulk operations | executeBatch / COPY | Flush/clear cycles | Reactive SQL Client |
| Read-modify-write cycles | Manual SQL | Automatic — 27% overhead | Depends on volume |
Choose the Reactive SQL Client when: you are building a high-throughput service where latency matters at the millisecond level. Microservices with narrow, well-defined data access patterns. APIs that serve denormalized data. Anything where you would write SQL by hand anyway. Event-driven architectures where pipelining multiple independent lookups is the common pattern. Serverless functions where startup time and memory footprint determine your cloud bill. Bulk data processing where the persistence context adds overhead without adding value.
Choose Hibernate Reactive when: your domain model has complex entity relationships that would be painful to manage with manual JOINs and row mapping. CRUD-heavy applications where dirty checking saves significant boilerplate. Teams with deep JPA expertise who would lose velocity learning the Vert.x API. Applications where the 58-80% overhead on individual queries is acceptable because you are not latency-sensitive at that granularity. Early-stage applications where the schema evolves weekly and entity-driven development reduces coordination cost.
Use both in the same application: this is the option Quarkus documentation mentions but few teams consider seriously. It is, in my professional opinion, often the correct answer.
// Using both approaches in the same Quarkus application.
// Admin panel uses Hibernate Reactive. Hot path uses the SQL client.
@ApplicationScoped
public class OrderService {
@Inject PgPool pgPool; // Reactive SQL Client
@Inject Mutiny.SessionFactory sessionFactory; // Hibernate Reactive
// Hot path — 10,000+ req/s. Pipelining. Minimal overhead.
public Uni<DashboardData> getDashboard(long userId) {
Uni<RowSet<Row>> orders = pgPool
.preparedQuery("SELECT id, total, created_at FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT 10")
.execute(Tuple.of(userId));
Uni<RowSet<Row>> stats = pgPool
.preparedQuery("SELECT count(*), sum(total) FROM orders WHERE user_id = $1")
.execute(Tuple.of(userId));
return Uni.combine().all().unis(orders, stats).asTuple()
.onItem().transform(t -> buildDashboard(t.getItem1(), t.getItem2()));
}
// Admin panel — 50 req/s. Entity relationships. CRUD convenience.
public Uni<Order> updateOrderAdmin(long orderId, OrderUpdateRequest req) {
return sessionFactory.withTransaction((session, tx) ->
session.find(Order.class, orderId)
.onItem().ifNotNull().transformToUni(order -> {
order.setStatus(req.status());
order.setShippingAddress(req.address());
order.setNotes(req.notes());
// Dirty checking handles the UPDATE
return session.flush().replaceWith(order);
})
);
}
}
// Both share the same underlying PostgreSQL connection pool configuration.
// Both use the same Vert.x event loop.
// The constraint: do not mix them in a single transaction.
Use Hibernate Reactive for the admin panel and CRUD endpoints where developer productivity matters most. Use the Reactive SQL Client for the hot path — the search endpoint, the analytics aggregation, the dashboard loader — where pipelining and raw throughput justify the manual mapping. The hot path is typically 3-5 endpoints handling 80% of your traffic. The CRUD surface is typically 20-30 endpoints handling 20% of your traffic. Optimizing the 3-5 endpoints that matter while keeping the 20-30 endpoints easy to maintain is not a compromise. It is good engineering.
Quarkus supports this without conflict. Both share the same underlying connection pool configuration. The only constraint is that you should not mix them within a single transaction, since the Hibernate session and the raw PgPool connection are separate objects with separate transaction scopes.
What Gold Lapel does with queries from either approach
Gold Lapel operates as a PostgreSQL proxy. It sits between your Quarkus application and the database, intercepts every query on the wire protocol, and applies optimizations transparently. It does not know or care whether a query came from the Vert.x client or from Hibernate Reactive. A query is a query.
# Gold Lapel works with both approaches. No query changes.
# Add the Maven dependency and GL handles the connection transparently.
# application.properties — Reactive SQL Client
quarkus.datasource.reactive.url=postgresql://localhost:5433/mydb
# GL proxy listens on 5433, forwards to PostgreSQL on 5432
# application.properties — Hibernate Reactive
quarkus.datasource.reactive.url=postgresql://localhost:5433/mydb
quarkus.hibernate-orm.database.generation=none
# Same connection string. Both approaches.
# GL sees every query — whether hand-written SQL from the Vert.x client
# or generated HQL from Hibernate Reactive — and optimizes accordingly.
#
# Auto-indexing: GL detects unindexed WHERE clauses from either approach
# Query rewriting: GL rewrites inefficient patterns regardless of origin
# Materialized views: GL identifies repeated aggregations and caches them
# Connection pooling: GL manages the PostgreSQL side; Quarkus manages the app side
This is particularly relevant for the two approaches discussed here.
For the Reactive SQL Client, Gold Lapel catches the queries you wrote yourself. If a WHERE clause hits an unindexed column, Gold Lapel creates the index. If an aggregation runs repeatedly, Gold Lapel materializes the result. If a query can be rewritten to use an existing index more effectively, Gold Lapel rewrites it. Your hand-written SQL gets the optimization attention it deserves — the attention that you, understandably, do not have time to give every query in a codebase of five hundred endpoints.
For Hibernate Reactive, Gold Lapel catches the SQL that Hibernate generated — including the queries affected by literal_handling_mode.
-- Gold Lapel normalizes Hibernate's literal-inlined queries automatically.
-- Even without literal-handling-mode=bind, GL recognizes the pattern.
-- Hibernate sends these five distinct SQL strings:
-- SELECT ... WHERE status = 'active'
-- SELECT ... WHERE status = 'shipped'
-- SELECT ... WHERE status = 'pending'
-- SELECT ... WHERE status = 'cancelled'
-- SELECT ... WHERE status = 'returned'
-- Gold Lapel's query fingerprinting sees one pattern:
-- SELECT ... WHERE status = ?
-- Auto-index analysis runs once. Query stats aggregate correctly.
-- The literal_handling_mode bug still wastes PostgreSQL's plan cache,
-- but GL's optimization layer is not fooled by it.
-- Fix the setting anyway. GL catches the symptoms, but the disease
-- is still burning through plan cache entries on the PostgreSQL side.
Even if you have not set literal-handling-mode=bind, Gold Lapel normalizes query patterns and identifies that WHERE status = 'active' and WHERE status = 'shipped' are the same query shape. The auto-indexing works regardless of whether parameters are bound or inlined. Fix the setting anyway — Gold Lapel catches the symptoms at the proxy layer, but the disease is still burning through plan cache entries on the PostgreSQL side.
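For the curious, the shape of that fingerprinting is easy to sketch in plain Java. This is an illustration of the idea only — not Gold Lapel's implementation, which parses the SQL rather than pattern-matching it:

```java
import java.util.regex.Pattern;

// Illustrative sketch: collapse string and numeric literals so textual
// variants of the same query shape share one fingerprint. A regex pass
// like this is approximate; real fingerprinting works on a parse tree.
final class Fingerprint {
    private static final Pattern STRING_LITERAL = Pattern.compile("'(?:[^']|'')*'");
    private static final Pattern NUMBER_LITERAL = Pattern.compile("\\b\\d+(?:\\.\\d+)?\\b");

    private Fingerprint() {}

    static String of(String sql) {
        String normalized = STRING_LITERAL.matcher(sql).replaceAll("?");
        return NUMBER_LITERAL.matcher(normalized).replaceAll("?");
    }
}
```

Every inlined variant of the status query reduces to the same shape, which is why the per-shape statistics aggregate correctly even when the SQL strings differ.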
Pipelined or sequential, hand-written or generated, parameterized or literal — every query that reaches PostgreSQL passes through Gold Lapel first. The optimization is the same. The developer effort is zero.
Your persistence strategy is a choice about developer ergonomics, team expertise, and application architecture. The database optimization should not depend on which choice you made. With Gold Lapel, it does not.
Final thoughts from the service entrance
If you have read this far, you have the information you need. Allow me a brief observation before I see you to the door.
The reactive ecosystem in Java has produced remarkable engineering. Quarkus's build-time processing, Vert.x's event loop, Mutiny's reactive types, Hibernate Reactive's adaptation of a blocking API to a non-blocking world — each of these is a significant achievement. The fact that you can run a Java application with 55ms cold-start and sub-millisecond query latency in a native image is extraordinary. It was not possible five years ago.
But the choice between the Reactive SQL Client and Hibernate Reactive is not a choice between good and bad engineering. It is a choice between two well-engineered tools optimized for different constraints. The Reactive SQL Client optimizes for throughput and control. Hibernate Reactive optimizes for developer productivity and domain modeling. Neither is wrong. Both are opinionated.
Choose based on what your application needs, not on what the benchmarks look like in a blog post. The benchmarks tell you the cost of each approach. Only you know the value.
Should you require assistance with the queries themselves — whichever approach generates them — the household staff is available. We attend to PostgreSQL. The framework is your affair.
Whichever reactive path you choose, the queries it generates will benefit from the same indexing discipline. The Spring Boot materialized views chapter may seem an odd recommendation for a Quarkus shop, but the PostgreSQL patterns it demonstrates are JVM-universal — only the framework wiring differs.