
JDBI and PostgreSQL: The Lightweight Persistence Layer That Lets Postgres Be Postgres

No session cache. No identity map. No LazyInitializationException. Just your SQL, bound correctly, running against the database you chose for a reason.

The Waiter of Gold Lapel · Updated Mar 20, 2026 · Published Mar 5, 2026 · 38 min read
The illustrator proposed a fifteen-layer architectural diagram. We returned it with a polite note suggesting fewer layers.

Good evening. May I take your ORM?

There is a particular kind of Java developer who has reached a quiet conclusion. They have spent years with Hibernate. They have configured hibernate.jdbc.batch_size and hibernate.order_inserts. They have debugged the QueryPlanCache memory leak that shows up when you pass variable-length IN clauses. They have traced a 4-second API response to a LazyInitializationException that was silently swallowed by a framework somewhere between the controller and the serializer. They have written @Transactional(readOnly = true) on a method that was already read-only, because Spring's OSIV filter was holding the Hibernate Session open across the entire HTTP request and the dirty-check flush at commit time was scanning 200 managed entities for changes that did not exist.

These developers do not hate Hibernate. They simply no longer need what it provides. They need to run SQL against PostgreSQL and get results back. They need the SQL they write to be the SQL that executes. They need to stop debugging the persistence layer and start building the product.

JDBI is what they find when they go looking.

If you will permit me, I should like to spend some time with this library — not as a quick overview, but as a thorough examination of what it provides, what it deliberately omits, and why those omissions are, for a certain class of application, the most important feature on the list. We will look at real SQL, real execution plans, real performance numbers, and the specific, named failure modes that JDBI is structurally immune to. We will also look at where JDBI falls short, because a recommendation without honest boundaries is not a recommendation. It is a sales pitch. And I have never been in sales.

What JDBI is, and what it deliberately is not

JDBI is a SQL convenience library for Java. Not an ORM. Not a framework. A library. The distinction matters, and it is worth dwelling on for a moment, because the Java ecosystem has spent two decades conflating "database access" with "object-relational mapping," and the conflation has caused a great deal of unnecessary suffering.

An ORM manages the relationship between your object model and your relational schema. It generates SQL. It tracks which objects have changed since they were loaded. It maintains a cache of entities within a session, so that loading the same row twice returns the same Java object — the identity map guarantee. It creates proxy objects for relations you haven't loaded yet, so it can fetch them lazily if you touch them — or throw LazyInitializationException if the session has closed. It maintains a QueryPlanCache that maps HQL strings to parsed query trees, which can consume hundreds of megabytes if your queries have variable-length parameter lists.

These are not flaws. They are features, designed for a specific kind of application: one with a complex domain model, deep object graphs, and a team that wants to think in objects rather than tables. When that description fits, Hibernate is genuinely powerful software. I would not suggest otherwise.

But a great many applications do not fit that description. They are services. They receive requests. They run SQL. They return results. The object graph is flat. The domain model is a handful of DTOs. The most complex thing the persistence layer does is a three-table join with a WHERE clause and a LIMIT. For these applications, an ORM's features are not merely unused — they are liabilities, because every feature carries maintenance cost, debugging cost, and runtime cost, whether you use it or not.

JDBI provides exactly what these applications need:

  • Parameter binding — named parameters (:status) instead of positional (?). Readable, safe, and unambiguous when your query has twelve parameters.
  • Result mapping — map ResultSet rows to Java objects by column name, constructor, or custom mapper. No entity metadata. No @Column annotations.
  • Resource management — connections, statements, and result sets are opened and closed correctly, every time, via Handle and try-with-resources patterns. This sounds trivial. It is not. JDBC resource leaks are the number one cause of connection pool exhaustion in production Java applications, and JDBI eliminates the entire class of bugs.
  • Transaction handling — explicit begin/commit/rollback with savepoint support. You know when a transaction starts. You know when it ends. There is no proxy-based interception that might or might not be active depending on caller context.
  • Plugin system — the jdbi3-postgres module adds type mappers for PostgreSQL-native types. One line of code.
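
The transaction bullet deserves a concrete illustration, since explicitness is the point. What follows is a minimal sketch — the accounts and fees tables, the column names, and the bound values are assumptions invented for the example; useTransaction, savepoint, and rollbackToSavepoint are JDBI's Handle API:

```java
// Hypothetical schema: accounts(id, balance), fees(account_id, amount).
// Everything inside useTransaction commits together or rolls back together.
jdbi.useTransaction(handle -> {
    handle.createUpdate("UPDATE accounts SET balance = balance - :amt WHERE id = :id")
        .bind("amt", 2500)
        .bind("id", 1L)
        .execute();

    // Savepoint: undo part of the transaction without losing the rest
    handle.savepoint("before_fee");
    try {
        handle.createUpdate("INSERT INTO fees (account_id, amount) VALUES (:id, :fee)")
            .bind("id", 1L)
            .bind("fee", 25)
            .execute();
    } catch (Exception e) {
        handle.rollbackToSavepoint("before_fee");  // the balance update survives
    }

    handle.createUpdate("UPDATE accounts SET balance = balance + :amt WHERE id = :id")
        .bind("amt", 2500)
        .bind("id", 2L)
        .execute();
});  // commits here; any uncaught exception rolls the whole thing back
```

The transaction boundary is the lambda. There is no proxy, no propagation attribute, no ambiguity about whether the annotation was intercepted.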

That is the complete list. There is no session cache. There is no identity map. There is no change tracking. There is no lazy loading. There is no second-level cache. There is no persistence.xml. The startup time for a JDBI application is the time it takes to create a DataSource and call Jdbi.create() — typically under 100 milliseconds, compared to the 3-15 seconds a Hibernate SessionFactory needs to scan annotations, validate mappings, and build its metadata model.

JDBI — query and bind
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.postgres.PostgresPlugin;

// Create a JDBI instance with the Postgres plugin
Jdbi jdbi = Jdbi.create(dataSource)
    .installPlugin(new PostgresPlugin());

// That's it. No SessionFactory. No EntityManagerFactory.
// No persistence.xml. No hibernate.cfg.xml.
// One object. One plugin. Ready.

// Write a query. Get results.
List<Order> orders = jdbi.withHandle(handle ->
    handle.createQuery("""
        SELECT o.id, o.status, o.total, o.created_at,
               c.name AS customer_name
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        WHERE o.status = :status
          AND o.created_at >= :since
        ORDER BY o.created_at DESC
        LIMIT :limit
        """)
        .bind("status", "pending")
        .bind("since", LocalDate.of(2025, 1, 1))
        .bind("limit", 500)
        .mapTo(Order.class)
        .list()
);

// The SQL you wrote is the SQL that runs.
// No query plan cache. No session cache.
// No LazyInitializationException waiting in the wings.

The SQL you write is a string. JDBI binds your parameters into it, sends it to PostgreSQL, and maps the results. No query rewriting. No plan caching on the Java side. No implicit join fetching. If you want a join, you write JOIN. If you want a subquery, you write a subquery. If you want to use a PostgreSQL-specific function like generate_series() or jsonb_agg() or row_number() OVER, you just write it. The SQL is yours. JDBI merely ensures it reaches PostgreSQL intact.

The SQL Object API: declarative without the baggage

Writing SQL strings in Java code makes some people uncomfortable. I understand this. There is a widespread intuition that SQL should be generated, abstracted, hidden — that touching SQL directly is somehow retrograde, the way one might regard a household that still answers the door manually rather than installing an intercom.

I would suggest that the intercom has been rather unreliable.

JDBI accommodates the preference for declarative interfaces with its SQL Object API — an interface-based approach that looks superficially like a Spring Data repository but carries none of the runtime complexity.

SQL Object API — interface-based DAO
// JDBI's SQL Object API — declarative, still lightweight
public interface OrderDao {

    @SqlQuery("""
        SELECT o.id, o.status, o.total, o.created_at,
               c.name AS customer_name
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        WHERE o.status = :status
        ORDER BY o.created_at DESC
        LIMIT :limit
        """)
    List<Order> findByStatus(@Bind("status") String status,
                             @Bind("limit") int limit);

    @SqlUpdate("""
        UPDATE orders
        SET status = :newStatus, updated_at = now()
        WHERE id = :id
        """)
    int updateStatus(@Bind("id") long id,
                     @Bind("newStatus") String newStatus);

    @SqlBatch("""
        INSERT INTO order_items (order_id, product_id, quantity, price)
        VALUES (:orderId, :productId, :quantity, :price)
        """)
    void insertItems(@BindBean List<OrderItem> items);
}

// Attach it to a handle — no proxy generation at startup,
// no classpath scanning, no annotation processing phase
OrderDao dao = handle.attach(OrderDao.class);
List<Order> pending = dao.findByStatus("pending", 100);

The SQL is still right there in the annotation. No method-name-to-query derivation. No findByStatusAndCreatedAtGreaterThanOrderByCreatedAtDesc method names that break when you rename a field and do not tell you until runtime. No query generation from method signatures. You write the SQL. JDBI binds the parameters. PostgreSQL runs it.

@SqlBatch generates a JDBC batch for the list, which the PostgreSQL driver pipelines to the server instead of issuing one round trip per row — and rewrites into true multi-row INSERTs if you set reWriteBatchedInserts=true on the connection. Hibernate can batch too, but it requires hibernate.jdbc.batch_size configuration, hibernate.order_inserts=true, and careful attention to identity generation strategy — IDENTITY columns disable batching entirely because Hibernate needs the generated ID back immediately for its persistence context. This is documented in the Hibernate User Guide, but the number of teams who have discovered it via a production performance regression suggests the documentation is not where people look first.

JDBI has no persistence context, so it has no such constraint. @SqlBatch batches. That is what it does.

How does this compare to Spring Data JPA directly?

Since the comparison is inevitable, allow me to make it explicit.

Spring Data JPA vs JDBI SQL Object — side by side
// Spring Data JPA — method name query derivation
public interface OrderRepository extends JpaRepository<Order, Long> {
    List<Order> findByStatusAndCreatedAtGreaterThanOrderByCreatedAtDesc(
        String status, LocalDate since);
}
// What SQL does this generate? Run it to find out.
// Rename 'status' to 'orderStatus'? The compiler is happy. The app fails at startup.

// Spring Data JPA — @Query with JPQL (not SQL)
@Query("SELECT o FROM Order o JOIN o.customer c "
     + "WHERE o.status = :status ORDER BY o.createdAt DESC")
List<Order> findByStatusWithCustomer(@Param("status") String status);
// JPQL — no PostgreSQL functions, no window functions,
// no CTEs, no jsonb operators. And it hits the QueryPlanCache.

// JDBI SQL Object — the actual SQL
@SqlQuery("""
    SELECT o.id, o.status, o.total, o.created_at,
           c.name AS customer_name,
           jsonb_agg(jsonb_build_object(
               'product', p.name, 'qty', oi.quantity
           )) AS items
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    LEFT JOIN order_items oi ON oi.order_id = o.id
    LEFT JOIN products p ON p.id = oi.product_id
    WHERE o.status = :status
    GROUP BY o.id, c.name
    ORDER BY o.created_at DESC
    LIMIT :limit
    """)
List<OrderWithItems> findByStatusWithItems(
    @Bind("status") String status,
    @Bind("limit") int limit);
// jsonb_agg, GROUP BY, window functions, CTEs — all available.
// Because it's SQL. PostgreSQL's SQL.

The JDBI version uses jsonb_agg() to nest order items into the parent query — a single SQL statement, a single round trip, no N+1 problem. The Spring Data JPA version cannot express this in JPQL. You would need a native query, which gives you raw SQL but loses the entity mapping, the persistence context integration, and the type safety that were the reasons you chose JPA in the first place.

This is the fundamental tension. JPA provides a portable query language that works across databases, but portability costs access to the database you actually chose. JDBI provides no portability and full access. For teams committed to PostgreSQL — which, in my experience, is most teams who have chosen PostgreSQL deliberately — the trade-off resolves cleanly.

Row mapping and column mapping: teaching JDBI your types

JDBI's result mapping is deliberately simple. By default, it maps columns to Java object fields by matching column names to field names, with underscore-to-camelCase conversion. For most DTOs, this is sufficient. For cases where it is not, JDBI provides two extension points: RowMapper and ColumnMapper.
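The convention is simple enough to state in code. This is an illustrative reimplementation of the idea, not JDBI's actual mapper internals:

```java
// Approximates the underscore-to-camelCase matching that JDBI's
// default mappers apply: customer_name -> customerName.
static String snakeToCamel(String column) {
    StringBuilder out = new StringBuilder(column.length());
    boolean upperNext = false;
    for (char c : column.toCharArray()) {
        if (c == '_') {
            upperNext = true;          // swallow the underscore
        } else {
            out.append(upperNext ? Character.toUpperCase(c) : c);
            upperNext = false;
        }
    }
    return out.toString();
}

// snakeToCamel("created_at")    -> "createdAt"
// snakeToCamel("customer_name") -> "customerName"
```

If a column name and field name line up under this rule, no configuration is needed at all.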

Custom RowMapper — explicit control over result mapping
// Custom RowMapper — when column names don't match fields,
// or when you need to map a joined result to nested objects

public class OrderWithCustomerMapper implements RowMapper<OrderWithCustomer> {
    @Override
    public OrderWithCustomer map(ResultSet rs, StatementContext ctx)
            throws SQLException {
        return new OrderWithCustomer(
            rs.getLong("order_id"),
            rs.getString("status"),
            rs.getBigDecimal("total"),
            rs.getTimestamp("created_at").toInstant(),
            new CustomerSummary(
                rs.getLong("customer_id"),
                rs.getString("customer_name"),
                rs.getString("customer_email")
            )
        );
    }
}

// Register globally, or use inline with a lambda
List<OrderSummary> summaries = handle.createQuery("""
    SELECT o.id, o.status, o.total,
           count(oi.id) AS item_count
    FROM orders o
    LEFT JOIN order_items oi ON oi.order_id = o.id
    WHERE o.created_at >= :since
    GROUP BY o.id
    ORDER BY o.total DESC
    LIMIT 50
    """)
    .bind("since", LocalDate.of(2025, 1, 1))
    .map((rs, ctx) -> new OrderSummary(
        rs.getLong("id"),
        rs.getString("status"),
        rs.getBigDecimal("total"),
        rs.getInt("item_count")
    ))
    .list();

The lambda form is worth noting. For one-off queries where the result shape does not justify a dedicated class, you map the ResultSet inline. No entity. No annotations. No reflection. A function that takes a ResultSet and returns an object.

ColumnMapper and ArgumentFactory: custom types everywhere

If your domain has value types — Money, EmailAddress, UserId — you register a ColumnMapper and ArgumentFactory once, and JDBI handles them everywhere. This is the extensibility model: global registration, not per-field annotation.

ColumnMapper and ArgumentFactory — global type handling
// ColumnMapper — teach JDBI a custom type, used everywhere
public class MoneyMapper implements ColumnMapper<Money> {
    @Override
    public Money map(ResultSet rs, int col, StatementContext ctx)
            throws SQLException {
        long cents = rs.getLong(col);
        if (rs.wasNull()) return null;
        return Money.ofCents(cents);
    }
}

jdbi.registerColumnMapper(Money.class, new MoneyMapper());

// Now any query mapping to a class with a Money field just works:
@SqlQuery("SELECT id, total FROM orders WHERE id = :id")
Order findById(@Bind("id") long id);
// If Order has a 'Money total' field, MoneyMapper handles it.

// ArgumentFactory — teach JDBI to bind a custom type
public class MoneyArgumentFactory extends AbstractArgumentFactory<Money> {
    public MoneyArgumentFactory() {
        super(Types.BIGINT);
    }

    @Override
    protected Argument build(Money value, ConfigRegistry config) {
        return (pos, stmt, ctx) -> stmt.setLong(pos, value.toCents());
    }
}

jdbi.registerArgument(new MoneyArgumentFactory());

// Now bind Money directly in any query:
handle.createUpdate("UPDATE orders SET total = :total WHERE id = :id")
    .bind("total", Money.ofDollars(49, 99))
    .bind("id", orderId)
    .execute();

Compare this to Hibernate's approach. In Hibernate 5, custom types required implementing the UserType interface — 80-120 lines of boilerplate per type, with methods like nullSafeGet, nullSafeSet, deepCopy, isMutable, assemble, disassemble, and replace. Hibernate 6 improved this significantly with @JdbcTypeCode, but the registration is still annotation-per-entity-field. JDBI's model is closer to how a well-run household operates: teach the staff once how to handle a particular item, and it is handled correctly everywhere thereafter.

What you escape by choosing JDBI

This is not theoretical. These are specific, named problems that JDBI is structurally immune to — not because it solves them cleverly, but because it lacks the machinery that creates them. I have encountered each of these in production systems. Some of them more than once, which is the kind of repetition that prompts one to reconsider the foundations.

The QueryPlanCache memory leak

Hibernate maintains a QueryPlanCache that stores parsed representations of HQL and Criteria queries. The cache is keyed by the full query string, including parameter placeholders. If your application builds queries with variable-length IN clauses — WHERE id IN (:ids) where :ids might be 5 items or 500 — each distinct parameter count produces a unique cache entry. Each entry contains the parsed HQL AST, the generated SQL, and the parameter metadata. For a complex query, a single entry can consume 20-50 KB of heap.

Applications with high cardinality IN clauses have reported QueryPlanCache sizes exceeding 500 MB. Hibernate 5.2.12 added hibernate.query.plan_cache_max_size (default: 2048 entries), but by then the pattern was well-established in production heaps. The full story of this failure mode is worth reading if you have not encountered it yet — and worth reviewing if you have, because the fixes are more nuanced than reducing the cache size.
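The mechanics of the blow-up are easy to demonstrate. Here is a toy model — not Hibernate's code — showing how a cache keyed by the full query string grows one entry per distinct IN-list length:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of a cache keyed by the full query string.
// Logically this is ONE query; the cache sees 500 different ones.
static String queryFor(int inListSize) {
    return "SELECT o FROM Order o WHERE o.id IN ("
        + "?,".repeat(inListSize - 1) + "?)";
}

Set<String> cacheKeys = new HashSet<>();
for (int n = 1; n <= 500; n++) {
    cacheKeys.add(queryFor(n));   // every distinct length is a new cache entry
}
// cacheKeys.size() == 500 -- at 20-50 KB per parsed entry,
// that is 10-25 MB of heap for a single logical query shape
```

Parameter-list padding (always binding to the next power of two) shrinks the key space, which is one of the nuanced fixes alluded to above.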

JDBI has no query plan cache. It sends the SQL string to PostgreSQL every time. PostgreSQL caches plans where it should — per connection, for server-side prepared statements (visible in pg_prepared_statements) — and exposes query-level statistics through pg_stat_statements. The caching happens in the database that understands query plans.

LazyInitializationException

This exception occurs when you access a lazy-loaded relation on a Hibernate entity after the Session that loaded it has been closed. It is arguably the most common Hibernate exception in production applications. The Spring Open Session in View (OSIV) pattern exists primarily to avoid it — by keeping the Hibernate Session open for the entire HTTP request, so that lazy relations can be loaded during view rendering.

OSIV has its own problems. It holds a database connection for the full request duration, including template rendering, JSON serialization, and response compression. Under load, this exhausts the connection pool. The Vlad Mihalcea article on OSIV calls it an anti-pattern. Spring Boot enables it by default anyway, with a warning log message that most teams either miss or suppress.

JDBI has no lazy loading. If you want related data, you write a JOIN. If you do not want related data, you do not write a JOIN. There is no proxy object waiting to ambush you after the connection has been returned to the pool. The data you have is the data you asked for. The data you did not ask for does not appear. This is, I would suggest, how a well-ordered system should behave.

The N+1 query problem via lazy relations

In Hibernate, iterating over a collection of entities and accessing a lazy relation on each one produces N additional SQL queries — one per parent entity. This is the classic N+1 problem, and it is a natural consequence of transparent lazy loading. Loading 100 orders and accessing order.getCustomer().getName() on each one produces 1 query for the orders and 100 queries for the customers. The total round-trip time for 101 queries, even on a local network, can exceed 200ms.

The fixes in Hibernate are well-known: @BatchSize, @Fetch(FetchMode.SUBSELECT), or rewriting with JOIN FETCH. Each fix requires understanding Hibernate's fetching strategies and their interactions with the first-level cache, and each has edge cases — JOIN FETCH with pagination, for instance, applies the LIMIT in memory after pulling the entire result set across the wire, announcing it only with the easily missed HHH000104 warning in the logs.

In JDBI, the query returns what the query selects. If you want the customer name, you join the customers table. If you want order items, you write a second query or use jsonb_agg() to nest them in the first. The SQL is visible. The execution plan is predictable. There is no implicit second query hiding behind a .getCustomer() call.
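The two-query version is worth showing, since it is the pattern people reach for when jsonb_agg() does not fit. The table, column, and accessor names are assumptions for the example; bindList with the angle-bracket placeholder is JDBI's documented way to expand a collection into an IN list:

```java
// Two round trips total, regardless of how many orders match.
List<Order> orders = handle.createQuery("""
    SELECT id, status, total FROM orders WHERE status = :status
    """)
    .bind("status", "pending")
    .mapTo(Order.class)
    .list();

List<Long> orderIds = orders.stream().map(Order::getId).toList();

// <ids> is expanded by bindList into (?, ?, ..., ?).
// Note: bindList rejects an empty collection -- guard if orders may be empty.
List<OrderItem> items = handle.createQuery("""
    SELECT order_id, product_id, quantity, price
    FROM order_items
    WHERE order_id IN (<ids>)
    """)
    .bindList("ids", orderIds)
    .mapTo(OrderItem.class)
    .list();
// 1 query for orders + 1 for items. Never 1 + N.
```

Both queries are visible in the source, both appear verbatim in pg_stat_statements, and the round-trip count is fixed at two by construction.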

Dirty checking overhead on read-only paths

When a Hibernate transaction commits, the Session performs a flush. During the flush, it compares every managed entity against a snapshot taken when the entity was first loaded. For 200 entities with 15 fields each, that is 3,000 field comparisons — on every transaction commit, including read-only transactions that changed nothing. The @Transactional(readOnly = true) annotation can disable this, but only if the underlying transaction manager propagates the read-only hint to the JDBC connection, which depends on your DataSource implementation and Spring version.

In practice, dirty checking on read-only transactions adds 2-8ms per request, depending on the number of managed entities. On a service handling 2,000 requests per second, that is 4-16 CPU-seconds per second spent comparing fields that have not changed. The overhead is invisible in profilers because it is distributed across every request — no single request triggers an alert, but the aggregate CPU cost is real.
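The arithmetic above, worked explicitly — the per-request overhead figures are the estimates from the text, not measurements:

```java
// Dirty-check cost on a read-only path, using the figures above.
int managedEntities = 200;
int fieldsPerEntity = 15;
int comparisonsPerCommit = managedEntities * fieldsPerEntity;   // 3,000 per flush

int requestsPerSecond = 2_000;
double overheadMsLow = 2.0;    // estimated per-request cost, low end
double overheadMsHigh = 8.0;   // high end
double cpuSecPerSecLow = requestsPerSecond * overheadMsLow / 1000.0;    // 4.0
double cpuSecPerSecHigh = requestsPerSecond * overheadMsHigh / 1000.0;  // 16.0
// 4-16 CPU-seconds of comparison work per wall-clock second,
// spread too thinly for any single request to trip an alert.
```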

JDBI has no managed entities. There is nothing to dirty-check. A read-only request allocates zero overhead for change tracking.

Hibernate vs JDBI — the operational difference
// Hibernate — what happens behind a simple .find()
Session session = sessionFactory.openSession();
Order order = session.find(Order.class, 42L);
// 1. Check first-level cache (Session)
// 2. Check second-level cache (if enabled)
// 3. Generate SQL from entity metadata
// 4. Execute SQL
// 5. Hydrate entity from ResultSet
// 6. Register in persistence context for dirty checking
// 7. Create proxy objects for lazy relations
// 8. Cache the generated load plan for reuse on later lookups
//
// order.getCustomer().getName();
// 9. Intercept proxy access
// 10. Check if session is still open (or throw LazyInitializationException)
// 11. Generate SQL for the relation
// 12. Execute SQL
// 13. Hydrate related entity
// 14. Register in persistence context
// 15. Return the name

// JDBI — what happens behind the same operation
Order order = jdbi.withHandle(h ->
    h.createQuery("SELECT * FROM orders WHERE id = :id")
     .bind("id", 42L)
     .mapTo(Order.class)
     .findOne()
     .orElse(null)
);
// 1. Bind parameter
// 2. Execute SQL
// 3. Map ResultSet columns to object fields
// That's the list. There is no step 4.

Fifteen steps versus three. The difference is not about speed — the individual steps are fast. The difference is about complexity. Fifteen steps means fifteen potential failure points. Three steps means three. When it is 3 AM and something has gone wrong, the system with fewer moving parts is the system you can diagnose.

PostgreSQL-native types via the jdbi3-postgres module

JDBC was designed as a lowest-common-denominator database API. It maps everything to java.sql types — strings, numbers, dates, blobs. PostgreSQL has dozens of types that JDBC has no representation for: uuid, inet, cidr, hstore, interval, jsonb, macaddr, int4range, tsvector.

Hibernate handles some of these via custom UserType implementations (80-120 lines of boilerplate per type in Hibernate 5) or the improved @JdbcTypeCode annotation in Hibernate 6. The registration model is still annotation-per-entity-field, which means if you have uuid primary keys on 40 entities, you annotate 40 fields.

JDBI's jdbi3-postgres module registers type mappers globally with a single line: jdbi.installPlugin(new PostgresPlugin()). After that, you bind UUID, InetAddress, Period, and Map<String, String> (for hstore) directly as parameters; add the jdbi3-jackson2 module and Jackson-mapped values travel as jsonb too. The plugin handles the PostgreSQL-specific PGobject wrapping behind the scenes.

Native PostgreSQL types with the PostgresPlugin
import org.jdbi.v3.postgres.PostgresPlugin;
import java.net.InetAddress;
import java.time.Period;
import java.util.UUID;

// The jdbi3-postgres module registers type mappers for
// PostgreSQL-specific types that JDBC doesn't handle natively

jdbi.installPlugin(new PostgresPlugin());

// UUID — stored as native uuid, not varchar(36)
handle.createUpdate("""
    INSERT INTO users (id, email, metadata)
    VALUES (:id, :email, :metadata)
    """)
    .bind("id", UUID.randomUUID())       // maps to pg uuid
    .bind("email", "ops@acme.com")
    .bind("metadata", jsonNode)           // maps to pg jsonb (with jdbi3-jackson2)
    .execute();

// InetAddress — maps to pg inet/cidr types
List<String> blockedRanges = handle.createQuery("""
    SELECT host(network) || '/' || masklen(network)
    FROM blocked_networks
    WHERE network >> :addr::inet
    """)
    .bind("addr", InetAddress.getByName("10.0.1.55"))
    .mapTo(String.class)
    .list();

// hstore — maps to Map<String, String>
Map<String, String> settings = handle.createQuery("""
    SELECT settings FROM app_config WHERE env = :env
    """)
    .bind("env", "production")
    .mapTo(new GenericType<Map<String, String>>() {})
    .one();

// Period — maps to pg interval
handle.createUpdate("""
    UPDATE subscriptions
    SET expires_at = now() + :duration
    WHERE user_id = :userId
    """)
    .bind("duration", Period.ofMonths(3))  // becomes '3 months'::interval
    .bind("userId", userId)
    .execute();

Why type fidelity matters for query performance

This is not merely a convenience. Type mismatches between Java and PostgreSQL force implicit casts, which can prevent index usage. Allow me to demonstrate with a concrete example that I have seen — more than once — in production systems.

UUID type mismatch — the performance difference
-- When UUID is sent as varchar (common with raw JDBC):
EXPLAIN ANALYZE
SELECT * FROM users WHERE id = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890';

--  Seq Scan on users  (cost=0.00..18334.00 rows=1 width=152)
--                      (actual time=87.412..87.412 rows=1 loops=1)
--    Filter: ((id)::text = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'::text)
--    Rows Removed by Filter: 499999
--  Planning Time: 0.089 ms
--  Execution Time: 87.439 ms

-- When UUID is sent as native uuid type (via PostgresPlugin):
EXPLAIN ANALYZE
SELECT * FROM users WHERE id = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'::uuid;

--  Index Scan using users_pkey on users  (cost=0.42..8.44 rows=1 width=152)
--                                         (actual time=0.028..0.029 rows=1 loops=1)
--    Index Cond: (id = 'a1b2c3d4-e5f6-7890-abcd-ef1234567890'::uuid)
--  Planning Time: 0.073 ms
--  Execution Time: 0.047 ms

-- 87ms vs 0.05ms. The index was there the whole time.
-- The type mismatch prevented PostgreSQL from using it.

87ms versus 0.05ms. A factor of 1,740. The index was there the whole time. The B-tree on the uuid column was perfectly maintained, perfectly healthy. But PostgreSQL could not use it, because the comparison required casting every indexed uuid value to text. The query planner has no choice — if the types do not match, it cannot use the index. It must scan every row.

The PostgresPlugin sends UUIDs as the PostgreSQL uuid type. The column type matches. The index works. The query takes 0.05ms instead of 87ms. This is the kind of improvement that seems impossible until you understand the cause, and then seems obvious in retrospect.

The same principle applies to inet types for IP range queries, interval types for date arithmetic, and jsonb for GIN-indexed containment queries. Each type mismatch is a potential sequential scan disguised as an indexed query. The PostgresPlugin eliminates the entire class.

EXPLAIN ANALYZE: seeing what your queries actually do

I should like to make a point that is foundational to this entire discussion: if you cannot explain a query, you cannot optimize it. And if you cannot find the query, you cannot explain it.

With JDBI, the query in your source code is the query that runs against PostgreSQL. There is no translation layer. No HQL-to-SQL compilation. No Criteria API object graph being serialized into SQL at runtime. The @SqlQuery annotation contains SQL. That SQL goes to PostgreSQL. EXPLAIN ANALYZE on that SQL shows you exactly what your application is doing.

EXPLAIN ANALYZE — the query from your DAO, as PostgreSQL sees it
-- JDBI sends exactly what you write. So EXPLAIN ANALYZE
-- shows exactly what your application does.

-- This is the query from OrderDao.findByStatus("pending", 100):
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.status, o.total, o.created_at,
       c.name AS customer_name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'pending'
ORDER BY o.created_at DESC
LIMIT 100;

--  Limit  (cost=0.72..312.45 rows=100 width=64)
--         (actual time=0.041..0.387 rows=100 loops=1)
--    Buffers: shared hit=142
--    ->  Nested Loop  (cost=0.72..15623.89 rows=5012 width=64)
--                     (actual time=0.040..0.379 rows=100 loops=1)
--          Buffers: shared hit=142
--          ->  Index Scan Backward using orders_created_at_idx on orders o
--              (cost=0.42..8934.22 rows=5012 width=36)
--              (actual time=0.024..0.087 rows=117 loops=1)
--                Filter: ((status)::text = 'pending'::text)
--                Rows Removed by Filter: 14
--                Buffers: shared hit=9
--          ->  Index Scan using customers_pkey on customers c
--              (cost=0.29..1.33 rows=1 width=36)
--              (actual time=0.002..0.002 rows=1 loops=117)
--                Index Cond: (id = o.customer_id)
--                Buffers: shared hit=133
--  Planning Time: 0.284 ms
--  Execution Time: 0.421 ms

Several things worth noticing in this plan. The Nested Loop join between orders and customers is efficient — the inner loop uses the primary key index on customers, costing about 1 buffer hit per lookup. The outer scan uses orders_created_at_idx in backward direction for the ORDER BY ... DESC, and filters on status = 'pending' post-index-scan. Fourteen rows were removed by the filter. The LIMIT 100 stops the scan early. Total: 142 buffer hits, 0.42ms. Acceptable.

But if you will permit me, I should like to improve it.

After a partial index — the same query, tighter
-- After adding a partial index on status:
CREATE INDEX CONCURRENTLY orders_pending_created_at_idx
    ON orders (created_at DESC)
    WHERE status = 'pending';

-- Same query, dramatically tighter plan:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.status, o.total, o.created_at,
       c.name AS customer_name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'pending'
ORDER BY o.created_at DESC
LIMIT 100;

--  Limit  (cost=0.72..187.23 rows=100 width=64)
--         (actual time=0.029..0.198 rows=100 loops=1)
--    Buffers: shared hit=109
--    ->  Nested Loop  (cost=0.72..9361.78 rows=5012 width=64)
--                     (actual time=0.028..0.192 rows=100 loops=1)
--          Buffers: shared hit=109
--          ->  Index Scan using orders_pending_created_at_idx on orders o
--              (cost=0.42..2145.33 rows=5012 width=36)
--              (actual time=0.016..0.034 rows=100 loops=1)
--                Buffers: shared hit=4
--          ->  Index Scan using customers_pkey on customers c
--              (cost=0.29..1.33 rows=1 width=36)
--              (actual time=0.001..0.001 rows=1 loops=100)
--                Index Cond: (id = o.customer_id)
--                Buffers: shared hit=105
--  Planning Time: 0.312 ms
--  Execution Time: 0.224 ms

-- 0.421ms -> 0.224ms. Buffers: 142 -> 109.
-- The partial index covers only 'pending' orders (~2% of the table),
-- so it's tiny, warm in cache, never scans completed orders.

The partial index WHERE status = 'pending' contains only pending orders. For a table where 98% of orders are completed, the partial index is roughly 50 times smaller than a full index on created_at. Fewer pages to read. Warmer in cache. No wasted I/O scanning completed orders that will never match.
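The ratio in that claim is simple arithmetic. A throwaway sketch, where the 10-million-row table and 2% pending fraction are assumed figures rather than measurements:

```java
public class PartialIndexMath {

    // Ratio of a full index's entry count to a partial index's,
    // given the fraction of rows the partial index covers.
    static long sizeRatio(long totalRows, double coveredFraction) {
        return totalRows / (long) (totalRows * coveredFraction);
    }

    public static void main(String[] args) {
        // Assumed: 10M orders, ~2% pending (98% completed)
        System.out.println(sizeRatio(10_000_000L, 0.02));  // prints 50
    }
}
```

The leaf pages scale with the entry count, so a 50x smaller index means 50x fewer pages competing for shared_buffers.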

The important observation is not the specific improvement — it is the process. You find a slow query by searching your Java codebase for the SQL string. You copy the SQL. You run EXPLAIN ANALYZE. You see the plan. You create an index. You verify the improvement. At no point do you need to reverse-engineer what SQL Hibernate generated, reconstruct the session state that produced it, or wonder whether the second-level cache is changing the query pattern.

The SQL is the source of truth. EXPLAIN ANALYZE is the diagnostic tool. JDBI keeps these two things connected. That connection is, in my view, the most valuable property of the entire library.

Dynamic queries: building SQL without a Criteria API

A common objection to JDBI is that it cannot handle dynamic queries — those where the WHERE clause changes based on user input. Hibernate's Criteria API and Spring Data JPA's Specifications exist precisely for this use case.

JDBI handles this with pluggable template engines. The StringTemplateEngine, from the jdbi3-stringtemplate4 module, supports conditional blocks (`<if(x)> ... <endif>`) that include or exclude SQL fragments based on defined attributes.

Dynamic queries with JDBI's StringTemplateEngine
// Dynamic queries in JDBI — no Criteria API needed

public List<Order> searchOrders(OrderSearchCriteria criteria) {
    return jdbi.withHandle(handle -> {
        var query = handle.createQuery("""
            SELECT o.id, o.status, o.total, o.created_at
            FROM orders o
            WHERE 1=1
            <if(hasStatus)> AND o.status = :status <endif>
            <if(hasMinTotal)> AND o.total >= :minTotal <endif>
            <if(hasRegion)> AND o.region = :region <endif>
            <if(hasSince)> AND o.created_at >= :since <endif>
            ORDER BY o.created_at DESC
            LIMIT :limit
            """)
            .setTemplateEngine(new StringTemplateEngine())
            .define("hasStatus", criteria.getStatus() != null)
            .define("hasMinTotal", criteria.getMinTotal() != null)
            .define("hasRegion", criteria.getRegion() != null)
            .define("hasSince", criteria.getSince() != null)
            .bind("limit", criteria.getLimit());

        if (criteria.getStatus() != null)
            query.bind("status", criteria.getStatus());
        if (criteria.getMinTotal() != null)
            query.bind("minTotal", criteria.getMinTotal());
        if (criteria.getRegion() != null)
            query.bind("region", criteria.getRegion());
        if (criteria.getSince() != null)
            query.bind("since", criteria.getSince());

        return query.mapTo(Order.class).list();
    });
}

// Each filter combination produces a different SQL string.
// In Hibernate, each goes into the QueryPlanCache.
// In JDBI, each is just a string sent to PostgreSQL.

I should be honest about the trade-off. The Criteria API provides compile-time safety for dynamic queries — rename a field and the Criteria query breaks at compile time. JDBI's template approach is a string, and strings do not break until runtime. This is a genuine disadvantage. jOOQ addresses it more elegantly with its generated DSL.

The counterpoint is that Hibernate's Criteria queries go through the QueryPlanCache, and dynamic queries with varying filter combinations produce combinatorial cache entries. Eight optional filters means 256 possible combinations, each a unique cache key. The Specification pattern in Spring Data JPA interacts poorly with variable-length IN clauses, producing the same QueryPlanCache explosion via a more insidious mechanism — three levels of indirection between your Java code and the SQL that runs.

JDBI's string-based approach produces no cache entries on the Java side. Each query is a fresh string sent to PostgreSQL, where the database's own plan cache handles deduplication efficiently.
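To make the combinatorics concrete, here is a self-contained sketch, with illustrative names, that assembles a WHERE clause the way any dynamic-query layer ultimately must, and counts the distinct SQL strings optional filters can produce:

```java
import java.util.HashSet;
import java.util.Set;

public class FilterCombinatorics {

    // Build a WHERE clause from a subset of three optional filters.
    static String buildSql(boolean hasStatus, boolean hasMinTotal,
                           boolean hasRegion) {
        StringBuilder sql = new StringBuilder(
            "SELECT * FROM orders WHERE 1=1");
        if (hasStatus)   sql.append(" AND status = :status");
        if (hasMinTotal) sql.append(" AND total >= :minTotal");
        if (hasRegion)   sql.append(" AND region = :region");
        return sql.toString();
    }

    public static void main(String[] args) {
        Set<String> distinctSql = new HashSet<>();
        // Enumerate every on/off combination of the three flags
        for (int mask = 0; mask < (1 << 3); mask++) {
            distinctSql.add(buildSql(
                (mask & 1) != 0, (mask & 2) != 0, (mask & 4) != 0));
        }
        // 3 optional filters produce 2^3 = 8 distinct SQL strings;
        // 8 filters produce 2^8 = 256. In Hibernate each distinct
        // string is a QueryPlanCache entry held on the Java heap.
        System.out.println(distinctSql.size());  // prints 8
    }
}
```

The strings are identical whichever layer builds them; the difference is where the cached state lives: on the Java heap, or in PostgreSQL's own plan cache.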

Streaming and cursors: useStream() and ResultIterator

Loading a million rows into a List is a reliable way to produce an OutOfMemoryError. This seems obvious stated plainly, but it is precisely what query.getResultList() does in JPA, and it is the default behavior of every ORM's "find all" method.

Hibernate offers ScrollableResults for cursor-based access, but it requires an open Session for the duration of iteration and manual cleanup. Forget to close the ScrollableResults and you leak a database connection. Close the Session too early and you get — I regret to report — LazyInitializationException.

JDBI provides two streaming mechanisms, both with clean resource management.

useStream(): functional, auto-closing

useStream() passes a Java Stream backed by a live database cursor into a lambda. When the lambda completes — normally or via exception — the stream, the ResultSet, and the Statement are closed automatically. You never touch a ResultSet. You never worry about cleanup.

Note the setFetchSize(500) call. By default, the PostgreSQL JDBC driver fetches the entire result set into memory at once. Setting a fetch size tells the driver to use a server-side cursor that delivers rows in batches, with one caveat: pgjdbc only honors the fetch size when autocommit is disabled on the connection, for instance inside a transaction. With the cursor in play, memory usage stays constant regardless of result set size. For a 10-million-row export, the application uses the same heap as a 10-thousand-row export.

useStream() — streaming with automatic resource management
// Streaming large result sets with useStream()
// No buffering. No OutOfMemoryError on million-row exports.

jdbi.useTransaction(handle -> {  // autocommit off: pgjdbc honors the fetch size
    handle.createQuery("""
        SELECT id, email, signup_date, total_spend
        FROM customers
        WHERE region = :region
        ORDER BY signup_date
        """)
        .bind("region", "us-east")
        .setFetchSize(500)  // fetch 500 rows at a time from PG
        .mapTo(Customer.class)
        .useStream(stream -> {
            stream
                .filter(c -> c.getTotalSpend().compareTo(BigDecimal.ZERO) > 0)
                .map(CsvFormatter::toCsvLine)
                .forEach(csvWriter::writeLine);
        });
    // The stream is backed by a live ResultSet cursor.
    // Rows arrive in batches of 500. Memory stays flat
    // whether the table has 10,000 rows or 10,000,000.
});

// Compare to Hibernate:
// session.createQuery("FROM Customer WHERE region = :r")
//     .setParameter("r", "us-east")
//     .getResultList();  // <-- loads ALL rows into memory
//
// ScrollableResults exists but requires manual resource management
// and an open Session for the entire scroll duration.
// Close the Session too early? LazyInitializationException.
// Forget to close the ScrollableResults? Connection leak.

ResultIterator: pull-based, explicit lifecycle

When you need finer control — batch processing, conditional termination, progress logging — ResultIterator implements Closeable and works with try-with-resources. You pull rows one at a time, process them in chunks, and the iterator handles cleanup when the block exits.

ResultIterator — chunked batch processing
// ResultIterator — pull-based consumption with explicit lifecycle
try (ResultIterator<AuditEvent> events = handle.createQuery("""
        SELECT event_id, user_id, action, payload, created_at
        FROM audit_log
        WHERE created_at >= :since
        ORDER BY created_at
        """)
        .bind("since", Instant.now().minus(Duration.ofDays(30)))
        .setFetchSize(1000)
        .mapTo(AuditEvent.class)
        .iterator()) {

    int batch = 0;
    List<AuditEvent> chunk = new ArrayList<>(1000);

    while (events.hasNext()) {
        chunk.add(events.next());
        if (chunk.size() == 1000) {
            elasticClient.bulkIndex("audit", chunk);
            chunk.clear();
            batch++;
            log.info("Indexed batch {} ({}k events)", batch, batch);
        }
    }

    if (!chunk.isEmpty()) {
        elasticClient.bulkIndex("audit", chunk);
    }
}
// ResultIterator implements Closeable.
// try-with-resources guarantees the underlying ResultSet
// and PreparedStatement are closed, even if bulkIndex() throws.
// No session. No persistence context. No leak.

Both patterns work because JDBI does not have a session that might close underneath you. The Handle holds the connection. The stream or iterator holds the cursor. When you are done, both are released. The resource model is JDBC's own model, made ergonomic.

For data pipeline workloads — ETL jobs, report generation, bulk exports, audit log processing — this combination of cursor-based streaming and explicit resource management is difficult to improve upon. The application processes rows at the speed PostgreSQL can deliver them, without buffering, without a persistence context accumulating managed entities, and without the risk of a session closure interrupting the stream.

The COPY protocol: bulk loading at wire speed

The PostgreSQL COPY protocol is a binary streaming protocol purpose-built for bulk data transfer. It bypasses the SQL parser entirely and streams rows directly into the table's storage layer. For bulk loads, it is typically 8-50 times faster than batched INSERT statements.

No Java persistence library wraps COPY natively — not Hibernate, not jOOQ, not JDBI. The protocol is too PostgreSQL-specific. But the ease with which you access COPY reveals the philosophical difference between these libraries.

In Hibernate, getting the raw JDBC connection requires:

  1. Unwrapping the Session to get the SessionImplementor.
  2. Calling connection() to get the JDBC Connection — which Hibernate may or may not release depending on the connection release mode.
  3. Unwrapping the JDBC connection to get the PGConnection.
  4. Hoping the connection has not been released back to the pool during the operation.

In JDBI, the connection is right there on the Handle. One method call: handle.getConnection(). No unwrapping chain. No lifecycle concerns. The connection stays open for the duration of the Handle's scope, and you control that scope explicitly.

COPY protocol via JDBI's handle
// PostgreSQL COPY protocol — the fastest way to bulk load data
// JDBI doesn't wrap COPY directly, but its handle gives you
// the raw JDBC connection when you need it

jdbi.useHandle(handle -> {
    // Unwrap the PostgreSQL connection
    PGConnection pgConn = handle.getConnection()
        .unwrap(PGConnection.class);

    CopyManager copyManager = pgConn.getCopyAPI();

    // COPY IN — bulk load from CSV at wire-protocol speed
    String copySql = "COPY staging_events (event_id, user_id, action, ts) "
                   + "FROM STDIN WITH (FORMAT csv, HEADER true)";

    try (InputStream csvStream = Files.newInputStream(csvPath)) {
        long rowsCopied = copyManager.copyIn(copySql, csvStream);
        log.info("Loaded {} rows via COPY", rowsCopied);
    }

    // COPY OUT — bulk export without ORM serialization overhead
    String exportSql = "COPY (SELECT id, email, created_at FROM users "
                     + "WHERE created_at >= '2025-01-01') "
                     + "TO STDOUT WITH (FORMAT csv, HEADER true)";

    try (OutputStream out = Files.newOutputStream(exportPath)) {
        long rowsExported = copyManager.copyOut(exportSql, out);
        log.info("Exported {} rows via COPY", rowsExported);
    }
});

// Why COPY matters:
// INSERT 100,000 rows one at a time:  ~45 seconds
// INSERT 100,000 rows via batch:      ~8 seconds
// COPY 100,000 rows:                  ~0.9 seconds
//
// COPY uses a binary streaming protocol that bypasses the
// SQL parser entirely. It is not an optimization. It is a
// different protocol. And JDBI's thin abstraction over JDBC
// means you can reach it whenever you need it.

The performance difference is dramatic. For a data pipeline that loads 100,000 events per batch, COPY reduces the insert time from 8 seconds (batched inserts) to under 1 second. That is not a percentage improvement. That is a category change. The Spring Batch + COPY integration is also possible but requires considerably more ceremony — a custom ItemWriter, connection unwrapping through Spring's transaction infrastructure, and careful transaction boundary management. JDBI's useHandle lambda makes the same operation five lines of setup.

"Most application developers interact with PostgreSQL through an ORM that exposes perhaps 10-15% of what PostgreSQL actually offers. The remaining 85% includes features that eliminate the need for much of the infrastructure complexity that modern applications carry."

— from You Don't Need Redis, Chapter 1: Good Evening. We Have a Problem.

LISTEN/NOTIFY and advisory locks: PostgreSQL features that ORMs hide

PostgreSQL is not a database. It is an ecosystem that most teams use as a database. Beyond tables, indexes, and queries, PostgreSQL provides application-level primitives that can replace external services entirely. Two of the most valuable are LISTEN/NOTIFY for real-time events and advisory locks for distributed coordination.

JDBI's thin abstraction over JDBC makes both accessible without fighting the persistence layer.

LISTEN/NOTIFY: event streaming without a message broker

If you need to react to database changes in real time — order status updates, inventory changes, user signups — the typical approach is a message broker: RabbitMQ, Kafka, or SQS. Each brings deployment, monitoring, dead-letter queues, message serialization, consumer group management.

PostgreSQL's LISTEN/NOTIFY provides the same functionality for workloads where the message rate is moderate (hundreds per second rather than millions) and the consumers are database-connected application instances.

LISTEN/NOTIFY — real-time events via PostgreSQL
// PostgreSQL LISTEN/NOTIFY — real-time event streaming
// No polling. No message broker. Built into the database.

jdbi.useHandle(handle -> {
    PGConnection pgConn = handle.getConnection()
        .unwrap(PGConnection.class);

    handle.execute("LISTEN order_status_changes");

    // Poll for notifications (in a service loop)
    while (running) {
        PGNotification[] notifications = pgConn.getNotifications(5000);
        if (notifications != null) {
            for (PGNotification n : notifications) {
                OrderStatusEvent event = mapper.readValue(
                    n.getParameter(), OrderStatusEvent.class);
                eventHandler.handle(event);
            }
        }
    }
});

// The trigger that fires notifications:
-- CREATE OR REPLACE FUNCTION notify_order_status()
-- RETURNS TRIGGER AS $$
-- BEGIN
--   PERFORM pg_notify('order_status_changes',
--     json_build_object(
--       'order_id', NEW.id,
--       'old_status', OLD.status,
--       'new_status', NEW.status,
--       'changed_at', now()
--     )::text
--   );
--   RETURN NEW;
-- END;
-- $$ LANGUAGE plpgsql;
--
-- The function fires nothing on its own; attach it:
-- CREATE TRIGGER order_status_notify
--   AFTER UPDATE OF status ON orders
--   FOR EACH ROW
--   WHEN (OLD.status IS DISTINCT FROM NEW.status)
--   EXECUTE FUNCTION notify_order_status();

I should be forthcoming about the limits. LISTEN/NOTIFY is not Kafka. It has no persistence — if no listener is connected when a notification fires, the notification is lost. It has no replay. It has no consumer groups. Payloads are capped at roughly 8 KB. For high-throughput event streaming, Kafka is the correct tool. But for "notify the application when an order changes status" — which describes 80% of the event-driven use cases I encounter — LISTEN/NOTIFY eliminates an entire infrastructure dependency.

Advisory locks: distributed coordination without Redis

When you need to ensure that only one application instance runs a particular task — a daily report, a scheduled cleanup, a migration — the typical approach is a distributed lock via Redis, ZooKeeper, or etcd. Each brings operational overhead and failure modes. Redis SETNX locks can deadlock if the holder crashes. ZooKeeper requires its own cluster.

PostgreSQL advisory locks are transactional, reliable, and already there.

Advisory locks — distributed coordination via PostgreSQL
// PostgreSQL advisory locks via JDBI
// Application-level locking without Redis or ZooKeeper

public class DistributedTaskRunner {
    private final Jdbi jdbi;

    public boolean tryRunExclusive(long taskId, Runnable task) {
        return jdbi.withHandle(handle -> {
            boolean acquired = handle.createQuery(
                "SELECT pg_try_advisory_lock(:taskId)")
                .bind("taskId", taskId)
                .mapTo(Boolean.class)
                .one();

            if (!acquired) {
                log.info("Task {} already running elsewhere", taskId);
                return false;
            }

            try {
                task.run();
                return true;
            } finally {
                handle.execute("SELECT pg_advisory_unlock(?)", taskId);
            }
        });
    }
}

// Ensure only one instance runs a scheduled job:
taskRunner.tryRunExclusive(
    "daily-report-generation".hashCode(),
    () -> reportService.generateDailyReport()
);

// No Redis. No ZooKeeper. No etcd.
// The database you already have provides the lock.

The advisory lock lives in PostgreSQL's shared memory. It cannot be orphaned by an application crash: when the holder's connection dies, PostgreSQL releases the lock automatically. No additional infrastructure. No new API to learn. A SQL function call.
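One practical note: the pg_advisory_lock family keys on a 64-bit integer, and String.hashCode() squeezes every task name into 32 bits. A sketch of a more collision-resistant key derivation; the helper is mine, not part of JDBI or PostgreSQL:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class AdvisoryLockKeys {

    // Derive a stable 64-bit key from a task name: hash it and
    // take the first 8 bytes of the digest. This uses the full
    // bigint keyspace pg_advisory_lock accepts, with a far lower
    // collision probability than a 32-bit hashCode().
    static long lockKey(String taskName) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha.digest(
                taskName.getBytes(StandardCharsets.UTF_8));
            return ByteBuffer.wrap(digest).getLong();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Same name, same key, on every JVM and every deployment
        System.out.println(
            lockKey("daily-report-generation")
                == lockKey("daily-report-generation"));  // prints true
    }
}
```

Two tasks colliding on a key would silently serialize against each other, so the extra bits are cheap insurance.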

I raise these features in an article about JDBI because JDBI makes them accessible. Both require access to the raw PostgreSQL connection or the ability to execute PostgreSQL-specific SQL. JDBI provides both. An ORM that abstracts away the database makes these features harder to reach — not impossible, but harder. And "harder" often means "not used, even when they are the right tool."

The onDemand pattern: DAOs without session scope

One of JDBI's most elegant patterns is onDemand(). It creates a DAO implementation that acquires a connection from the pool for each method call and returns it immediately after. No session scope. No "open connection at the start of the request, close it at the end." Each DAO method is an independent database operation with its own connection lifecycle.

onDemand() — connection-per-call DAOs
// onDemand() — DAOs that acquire and release connections per call
OrderDao dao = jdbi.onDemand(OrderDao.class);

// Each method call:
// 1. Borrows a connection from the pool
// 2. Executes the SQL
// 3. Maps the result
// 4. Returns the connection to the pool

List<Order> pending = dao.findByStatus("pending", 100);
// Connection borrowed, query executed, connection returned.

int updated = dao.updateStatus(42L, "shipped");
// Connection borrowed, update executed, connection returned.

// Compare to Hibernate + OSIV:
// - Open Session (borrow connection)
// - Execute query 1
// - Execute query 2
// - Render JSON response
// - Flush (dirty check all managed entities)
// - Commit
// - Close Session (return connection)
//
// Connection held for the entire HTTP request duration.
// 200ms of DB work + 800ms of app work = 1000ms connection hold.
// With onDemand(), only the 200ms of DB work holds a connection.

This has a profound effect on connection pool utilization. In a Hibernate + OSIV application, a connection is held for the entire HTTP request, including template rendering, JSON serialization, and response compression. A request that needs 200ms of database time and 800ms of application time holds the connection for 1,000ms. By Little's law, the pool must then be five times larger than the database work alone would require.

With onDemand(), the connection is held only for the database operation. The 200ms request borrows a connection for 200ms. The pool can be smaller. The database handles fewer concurrent connections. The system scales further before hitting connection limits.

There is a trade-off: if a request needs three queries, onDemand() borrows and returns a connection three times. The pool overhead per acquisition is typically 10-50 microseconds with HikariCP, so three acquisitions add 30-150 microseconds. For most applications, noise. For latency-critical paths needing multiple queries in rapid succession, use withHandle() to share a single connection across operations.
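The pool arithmetic above is Little's law in miniature. A sketch using the article's example figures; the 100 requests per second is an assumed load, not a benchmark:

```java
public class PoolSizing {

    // Little's law: average connections in use =
    //   arrival rate (req/s) x average hold time (s)
    static double connectionsNeeded(double requestsPerSecond,
                                    double holdTimeSeconds) {
        return requestsPerSecond * holdTimeSeconds;
    }

    public static void main(String[] args) {
        double rps = 100.0;     // assumed: 100 requests/second
        double dbWork = 0.200;  // 200ms of actual database time
        double appWork = 0.800; // 800ms of rendering, serialization

        // OSIV: connection held for DB work + app work
        double osiv = connectionsNeeded(rps, dbWork + appWork);
        // onDemand(): connection held only for the DB work
        double onDemand = connectionsNeeded(rps, dbWork);

        System.out.println(osiv);             // prints 100.0
        System.out.println(onDemand);         // prints 20.0
        System.out.println(osiv / onDemand);  // prints 5.0
    }
}
```

Same load, same database work; the 5x difference is purely hold time.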

JDBI in Dropwizard: the canonical pairing

JDBI was created by Brian McCallister, and it became Dropwizard's persistence layer of choice early in both projects' lives. The two share a philosophy: provide excellent defaults, stay out of the way, and do not accumulate abstractions that users must learn to work around. The shared sensibility is not coincidental; it is architectural.

The dropwizard-jdbi3 module integrates JDBI with Dropwizard's managed connection pool, metrics (Dropwizard Metrics), and health checks. Every SQL query gets automatic timing metrics. Every DAO method produces a timer in your /metrics endpoint. There is no configuration for this. It happens because the integration is designed to happen.

Dropwizard + JDBI — integrated by default
// Dropwizard + JDBI — the canonical pairing
// dropwizard-jdbi3 gives you connection management, metrics,
// health checks, and transaction handling out of the box

public class OrderApplication extends Application<OrderConfig> {

    @Override
    public void run(OrderConfig config, Environment env) {
        // JdbiFactory wires JDBI into Dropwizard's managed
        // DataSource, metrics registry, and health checks
        JdbiFactory factory = new JdbiFactory();
        Jdbi jdbi = factory.build(env, config.getDatabase(), "postgresql");
        jdbi.installPlugin(new PostgresPlugin());
        jdbi.installPlugin(new SqlObjectPlugin());

        OrderDao orderDao = jdbi.onDemand(OrderDao.class);
        env.jersey().register(new OrderResource(orderDao));
    }
}

// The Dropwizard metrics integration means every SQL query
// gets timing metrics automatically:
//   orderDao.findByStatus  — timer, count, p95, p99
//
// No Hibernate statistics XML. No JMX MBean configuration.
// Metrics just appear in your /metrics endpoint.

This is the environment where JDBI thrives. A Dropwizard service with JDBI and PostgreSQL starts in 2-3 seconds, uses 80-120 MB of heap for a typical CRUD application, and produces SQL that is completely predictable — every query visible in the source code, every query visible in pg_stat_statements, every query explainable and optimizable independently.

Observability with Micrometer

If you are not using Dropwizard, JDBI integrates with Micrometer via its SqlLogger hook (the successor to the deprecated TimingCollector). Every query gets a timer. Slow queries get logged. The instrumentation is a single registration.

Micrometer integration — observability for every query
// Micrometer + JDBI — observability without ceremony
// SqlLogger is JDBI 3's instrumentation hook; it sees every
// statement with its timing and context.

public class AlertingSqlLogger implements SqlLogger {
    private final MeterRegistry registry;

    public AlertingSqlLogger(MeterRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void logAfterExecution(StatementContext ctx) {
        Duration elapsed = Duration.ofNanos(
            ctx.getElapsedTime(ChronoUnit.NANOS));
        String queryName = Optional.ofNullable(ctx.getExtensionMethod())
            .map(m -> m.getType().getSimpleName()
                     + "." + m.getMethod().getName())
            .orElse("anonymous");

        registry.timer("jdbi.query", "name", queryName)
            .record(elapsed);

        if (elapsed.toMillis() > 100) {
            log.warn("Slow query: {} took {}ms, SQL: {}",
                queryName, elapsed.toMillis(),
                ctx.getRenderedSql());
        }
    }
}

// Register it once:
jdbi.setSqlLogger(new AlertingSqlLogger(meterRegistry));

// Every DAO method then gets a Micrometer timer:
//   jdbi.query{name=OrderDao.findByStatus} — p50, p95, p99, max
//   jdbi.query{name=OrderDao.updateStatus} — p50, p95, p99, max
//
// Grafana dashboard query:
//   rate(jdbi_query_seconds_count{name="OrderDao.findByStatus"}[5m])

The slow query logging at the application layer complements PostgreSQL's own log_min_duration_statement. PostgreSQL logs the SQL and timing; the JDBI collector logs the DAO method name and SQL together. When a Grafana alert fires for elevated p99 latency on OrderDao.findByStatus, you know exactly which method to open, which SQL to EXPLAIN, and which index to consider. The distance between the alert and the fix is one file, one method, one query.

Transactions without surprises

Hibernate's transaction model is deeply intertwined with its persistence context. Committing a transaction triggers a flush. The flush triggers dirty checking. Dirty checking may generate UPDATE statements you did not write. If two entities reference the same row and one has been modified, the flush resolves the conflict — sometimes in ways that are difficult to predict without understanding the flush order algorithm.

Spring's @Transactional annotation adds another layer: proxy-based interception. If you call a @Transactional method from within the same class, Spring's proxy does not intercept the call, and the transaction annotation has no effect. This is a well-known limitation, thoroughly documented in the Spring documentation, and still the source of production bugs in 2026. The symptom is subtle: the method runs, the data appears saved, but the transaction boundary was never established, so a partial failure leaves the database inconsistent.
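The mechanism is easy to demonstrate without Spring at all. In this sketch a hand-rolled java.lang.reflect.Proxy stands in for Spring's transactional proxy; every name is illustrative:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class SelfInvocationDemo {

    interface OrderService {
        void placeOrder();   // calls auditOrder() internally
        void auditOrder();   // imagine @Transactional here
    }

    static class OrderServiceImpl implements OrderService {
        @Override public void placeOrder() {
            // Self-invocation: a plain Java call on 'this'.
            // The proxy never sees it.
            auditOrder();
        }
        @Override public void auditOrder() { /* work */ }
    }

    // Returns how many calls actually passed through the
    // stand-in "transaction" interceptor.
    static int runDemo() {
        AtomicInteger intercepted = new AtomicInteger();
        OrderService target = new OrderServiceImpl();

        InvocationHandler txInterceptor = (proxy, method, args) -> {
            intercepted.incrementAndGet();  // "open a transaction"
            return method.invoke(target, args);
        };
        OrderService proxied = (OrderService) Proxy.newProxyInstance(
            OrderService.class.getClassLoader(),
            new Class<?>[] {OrderService.class}, txInterceptor);

        proxied.placeOrder();  // intercepted; the nested call is not
        return intercepted.get();
    }

    public static void main(String[] args) {
        // One interception for placeOrder(); the internal
        // auditOrder() call bypassed the proxy entirely, so its
        // "transaction" was never opened.
        System.out.println(runDemo());  // prints 1
    }
}
```

Spring's proxy works the same way: only calls that arrive from outside the object pass through the interceptor, which is why an internal call to a @Transactional method gets no transaction.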

JDBI transactions are explicit.

JDBI transactions — explicit and composable
// Transactions in JDBI — explicit, composable, no surprises

// Simple transaction
jdbi.useTransaction(handle -> {
    long orderId = handle.createUpdate("""
        INSERT INTO orders (customer_id, status, total)
        VALUES (:customerId, 'pending', :total)
        """)
        .bind("customerId", customerId)
        .bind("total", total)
        .executeAndReturnGeneratedKeys("id")
        .mapTo(Long.class)
        .one();

    var batch = handle.prepareBatch("""
        INSERT INTO order_items (order_id, product_id, quantity, price)
        VALUES (:orderId, :productId, :quantity, :price)
        """);
    for (OrderItem item : items) {
        item.setOrderId(orderId);
        batch.bindBean(item).add();  // one batch row per item
    }
    batch.execute();

    // If anything throws, the transaction rolls back.
    // No @Transactional annotation magic.
    // No proxy-based interception that silently swallows exceptions.
    // No "the transaction committed because you called the method
    //  from within the same class and Spring couldn't intercept it."
});

// Savepoints — because sometimes you want partial rollback
jdbi.useTransaction(handle -> {
    handle.savepoint("before_enrichment");
    try {
        enrichmentService.enrich(handle, orderId);
    } catch (EnrichmentException e) {
        handle.rollbackToSavepoint("before_enrichment");
        log.warn("Enrichment failed, proceeding without it");
    }
    // The outer transaction continues regardless
});

You open a transaction. You execute SQL. If anything throws, the transaction rolls back. If nothing throws, it commits. There is no flush. There is no dirty check. There is no proxy. There are no generated SQL statements you did not write. The transaction boundary is a lambda scope, visible in the code, following normal Java control flow.

Savepoints work exactly as PostgreSQL implements them. You set a savepoint, attempt an operation, and roll back to the savepoint if it fails — without aborting the outer transaction. This is difficult to achieve in Hibernate because rolling back to a savepoint does not clear the entities from the persistence context loaded after the savepoint, leading to stale state. The Hibernate Session and the database transaction become desynchronized, and the behavior from that point is unpredictable.

In JDBI, there is no persistence context to desynchronize. The savepoint rolls back the database state. The Java variables hold whatever they held before the rollback. The two are independent, and their independence is the feature.

Testing with JDBI: real database, real queries

A common Hibernate testing pattern is to run tests against an in-memory H2 database. This is fast and requires no external infrastructure. It is also, if you will forgive my directness, a lie. H2 does not support PostgreSQL's jsonb operators. It does not support LATERAL JOIN. It does not support partial indexes, expression indexes, or GIN indexes. It does not support LISTEN/NOTIFY, advisory locks, or COPY. A test that passes on H2 tells you that your code works on H2.

JDBI encourages testing against real PostgreSQL, and its testing infrastructure makes it straightforward.

JDBI testing — against real PostgreSQL
// Testing with JDBI — real PostgreSQL, not in-memory H2

// jdbi3-testing + pg-embedded: real PostgreSQL under JUnit 5.
// EmbeddedPgExtension starts the server; JdbiExtension manages
// a Jdbi bound to it and injects Jdbi/Handle test parameters.
class OrderDaoTest {

    @RegisterExtension
    static EmbeddedPgExtension pg =
        MultiDatabaseBuilder.instanceWithDefaults().build();

    @RegisterExtension
    JdbiExtension extension = JdbiExtension.postgres(pg);

    @Test
    void findByStatus_returnsPendingOrders(Jdbi jdbi) {
        jdbi.useHandle(handle -> {
            handle.execute("""
                INSERT INTO orders (id, customer_id, status, total)
                VALUES (1, 100, 'pending', 49.99),
                       (2, 100, 'shipped', 29.99),
                       (3, 200, 'pending', 99.99)
                """);

            OrderDao dao = handle.attach(OrderDao.class);
            List<Order> pending = dao.findByStatus("pending", 100);

            assertThat(pending).hasSize(2);
            assertThat(pending).extracting(Order::getStatus)
                .containsOnly("pending");
        });
    }
}

// The test runs against real PostgreSQL.
// Same SQL. Same types. Same query plans.
// No "passes on H2, fails on Postgres" surprises.
//
// Each test gets a fresh, isolated database from the extension.
// No cleanup. No test data leaking between tests.

Each test runs against a fresh, isolated database, so test data does not leak between tests and there is no cleanup to manage. The database server is the same PostgreSQL you run in production — or a Testcontainers instance that starts an identical PostgreSQL version in Docker.

I have lost count of the number of times I have seen a team discover — in production — that their H2-tested Hibernate query does not work on PostgreSQL because H2 parses a JOIN differently, handles NULL differently in GROUP BY, or does not support a function the ORM generated SQL against. JDBI's approach eliminates this category of surprise. The test environment and the production environment run the same database. The queries are the same. The types are the same. The plans are the same.

How JDBI compares: the honest table

JDBI is not better than Hibernate or jOOQ in every dimension. It is better in specific dimensions that matter to specific teams. A waiter who overstates his case is no waiter at all, so here is where each tool earns its place — and where it does not.

Feature | Hibernate | jOOQ | JDBI
Session cache / identity map | Yes (1st + 2nd level) | No | No
Dirty checking / change tracking | Yes | Optional (UpdatableRecord) | No
Lazy loading proxies | Yes | No | No
QueryPlanCache | Yes (easily bloated by dynamic queries) | No | No
OSIV (Open Session in View) | Common pattern | N/A | N/A
You write the SQL | No (HQL/JPQL/Criteria) | DSL generates SQL | Yes
Native PG types (uuid, inet, hstore) | Via custom UserType | Via codegen + converter | Via PostgresPlugin
Streaming / cursor-based results | ScrollableResults (manual) | fetchStream() / fetchLazy() | useStream() / ResultIterator
COPY protocol access | Unwrap connection | Unwrap connection | Unwrap connection
Startup time (typical app) | 3-15 seconds | < 1 second | < 100 ms
JAR size (core) | ~7 MB | ~4 MB | ~800 KB
SQL visibility to proxy/optimizer | Low (dynamic, cached plans) | High (generated, predictable) | High (hand-written, stable)
Compile-time SQL validation | No (HQL is runtime-parsed) | Yes (generated DSL) | No (SQL strings)
Database portability | High (HQL abstracts dialect) | Medium (per-dialect codegen) | Low (raw SQL, by design)

A few things to notice. jOOQ and JDBI share the "no hidden state" property — neither maintains a session cache, identity map, or lazy proxies. The difference is that jOOQ generates SQL from a type-safe DSL (compile-time safety at the cost of a code generation step), while JDBI lets you write SQL directly (full PostgreSQL dialect access at the cost of runtime-only validation). The jOOQ MULTISET article on this site explores jOOQ's strengths in depth if you are evaluating between the two.

Hibernate wins when your domain model is complex, your team wants to think in objects rather than tables, and you are willing to invest in understanding the persistence context lifecycle. It is genuinely powerful software. The question is whether the power is proportional to the cost for your specific application.

For applications that are fundamentally "send SQL to PostgreSQL, get results back, serve them over HTTP" — which describes a large number of services — JDBI provides everything you need with nothing you must manage.

I should also note that the "database portability" row is not uniformly a disadvantage. If you have committed to PostgreSQL — and the partial indexes, native types, COPY protocol, advisory locks, and LISTEN/NOTIFY features we have discussed are only available on PostgreSQL — then portability is not a feature you need. It is a feature you are paying for with reduced access to the database you actually use.

Why JDBI and Gold Lapel pair well together

I should be straightforward about this, since it is the part where I mention the household I serve.

Gold Lapel is a PostgreSQL proxy that observes queries in real time and applies optimizations automatically — creating indexes for sequential scans, materializing expensive joins, rewriting suboptimal patterns. It works by analyzing the SQL that arrives at the proxy and matching it against the actual execution behavior inside PostgreSQL.

This analysis works best when the SQL is predictable, stable, and visible. And this is where the choice of persistence layer matters enormously.

Hibernate generates SQL dynamically. The same Java code can produce different SQL depending on which entities are in the session cache, which fetch strategies are active, whether a batch is being assembled, and whether the second-level cache returned a hit. The SQL that Gold Lapel sees may not be the SQL you expect, because you did not write it. Hibernate did, based on its internal state at that moment. Gold Lapel can still optimize Hibernate-generated SQL — it does, for many users — but SQL stability helps, and Hibernate's SQL is inherently variable.

JDBI sends the SQL you wrote. The query that Gold Lapel observes is the query in your source code. If Gold Lapel creates an index for a slow query, you can find that query with a text search of your codebase. If Gold Lapel reports that a particular query pattern accounts for 40% of database time, you can open the DAO interface, find the @SqlQuery annotation, and read the exact SQL running.
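To make that concrete, here is a minimal SQL-object sketch of the pattern described above. The table `orders` and its columns are hypothetical; what matters is that the string inside `@SqlQuery` is exactly what PostgreSQL receives, so a text search for `WHERE status` lands on this interface:

```java
import java.util.List;
import org.jdbi.v3.sqlobject.config.RegisterConstructorMapper;
import org.jdbi.v3.sqlobject.customizer.Bind;
import org.jdbi.v3.sqlobject.statement.SqlQuery;

// Hypothetical DAO against an assumed "orders" table. The annotation text
// is the SQL that executes -- no translation layer between source and wire.
@RegisterConstructorMapper(OrderSummary.class)
public interface OrderDao {
    @SqlQuery("""
        SELECT id, status, total_cents
        FROM orders
        WHERE status = :status
        ORDER BY id
        """)
    List<OrderSummary> findByStatus(@Bind("status") String status);
}

// ConstructorMapper matches columns to parameters by name,
// case-insensitively and ignoring underscores (total_cents -> totalCents).
record OrderSummary(long id, String status, long totalCents) {}
```

If Gold Lapel reports this query as a hot spot, the query text it reports is the query text in this file. There is nothing to reverse-engineer.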

There is no translation layer. No generated query to reverse-engineer. No session state to reconstruct. The SQL is the source of truth, and both JDBI and Gold Lapel treat it that way.

The combination is particularly effective for Dropwizard services. Dropwizard's metrics tell you which DAO methods are slow. Gold Lapel tells you which queries are slow and what optimizations would help. JDBI ensures that the DAO method and the query are the same thing, so the metrics and optimizations point to the same place in your code. The entire observability stack converges on a single SQL string in a single Java file.

The honest counterpoint: where JDBI costs you

I have spent considerable time discussing what JDBI gets right. It would be a disservice — to you and to this article's credibility — to stop there. JDBI has real limitations, and they matter.

No compile-time SQL validation

JDBI's SQL is a string. If you misspell a column name, you discover it at runtime. If you reference a table renamed in a migration, runtime. If your parameter types do not match column types, runtime. jOOQ catches all of these at compile time, because its generated DSL knows the schema. This is not a small advantage — it is the single strongest argument for jOOQ over JDBI.

The mitigation is testing against real PostgreSQL, but testing catches errors at test time, not compile time. The feedback loop is longer. For large codebases with hundreds of queries, this gap is noticeable.

No schema evolution tracking

When you rename a column in a migration, Hibernate fails at startup — loudly, clearly, with a mapping exception that tells you exactly which field does not match. JDBI will not fail until the query executes. Finding all queries affected by a column rename requires a text search across all SQL strings. Manageable but tedious. jOOQ catches it at build time. Hibernate catches it at startup. JDBI catches it at runtime.

No transparent relation handling

If your domain model has deep object graphs — an Order with OrderItems, each with a Product, each with a Category — Hibernate navigates this transparently. You say order.getItems().get(0).getProduct().getCategory() and Hibernate loads each relation as needed. With JDBI, you write the SQL for every level. For genuinely complex models with 5-6 levels of nesting, this is real work.

The counterargument is that transparent navigation often produces N+1 queries, and explicit SQL produces predictable execution. But there are applications where convenience outweighs performance cost, and JDBI does not serve those well.
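The explicit alternative is one join query plus a single-pass fold of the flat rows into the object graph. The folding itself is plain Java; the sketch below uses hypothetical row shapes (`orderId`, `itemId`, `productName`), and JDBI's `reduceRows` supports the same pattern directly against a live result set:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: a JOIN of orders to order_items returns one flat row per item;
// we fold rows sharing an orderId into a single Order with its items.
public class RowFolding {
    record Item(long itemId, String productName) {}
    record Order(long orderId, List<Item> items) {}

    // Each row: {orderId, itemId, productName}, in join order.
    static List<Order> fold(List<Object[]> rows) {
        Map<Long, Order> byId = new LinkedHashMap<>(); // preserves row order
        for (Object[] r : rows) {
            Order order = byId.computeIfAbsent((Long) r[0],
                    id -> new Order(id, new ArrayList<>()));
            order.items().add(new Item((Long) r[1], (String) r[2]));
        }
        return new ArrayList<>(byId.values());
    }
}
```

One query, one round trip, no proxies -- but you wrote the join and the fold yourself, which is precisely the cost the paragraph above describes.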

Smaller community, fewer resources

Hibernate has been the default Java persistence layer for twenty years. The volume of blog posts, conference talks, Stack Overflow answers, and training courses is enormous. JDBI's community is smaller, its documentation good but not exhaustive, and finding answers to edge cases sometimes requires reading the source code. For teams that depend on community resources, this is a real consideration.

These are not fatal limitations. They are trade-offs. Every tool has them. The question is which set matches your application and your team. For teams comfortable with SQL, committed to PostgreSQL, and building services where the persistence layer should be invisible, JDBI's trade-offs resolve favorably. For teams building complex domain models that genuinely benefit from transparent persistence, Hibernate's trade-offs resolve favorably. Both are correct answers to different questions.

When to choose JDBI, and when not to

Allow me to be specific, because generalities are the refuge of those who have not formed an opinion.

JDBI is the right choice when:

  • Your team is comfortable writing SQL and considers it a feature, not a chore. This is the foundational requirement. If writing SQL feels like a tax, JDBI will feel like paying that tax on every operation.
  • You are building services that are primarily read-heavy, where change tracking provides no value. Most microservices, most API backends, most data pipeline workers fall into this category.
  • You need access to PostgreSQL-specific features — jsonb, COPY, advisory locks, LISTEN/NOTIFY, partial indexes, expression indexes, custom types — without fighting an abstraction layer.
  • You value fast startup times. JDBI adds effectively zero to your application's boot time. For serverless-adjacent deployments, scheduled batch jobs, and CLI tools, this matters.
  • You are using Dropwizard, where JDBI integration is first-class and automatic.
  • You have experienced Hibernate's failure modes in production and have decided that the problems it solves are not problems you have.
  • You want the SQL in your source code to be the SQL that runs against your database, with no translation, no generation, and no cache between them.
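The PostgreSQL-specific point deserves an illustration. Here is a hedged sketch against a hypothetical `events` table with a `jsonb` column named `payload`; the `->>` and `@>` operators appear verbatim in the query text, with no abstraction layer to negotiate:

```java
import java.util.List;
import org.jdbi.v3.core.Jdbi;

// Sketch (assumed "events" table, jsonb "payload" column): jsonb operators
// are written directly in the SQL, bound and mapped by JDBI's fluent API.
public class JsonbExample {
    static List<String> apiEventTypes(Jdbi jdbi) {
        return jdbi.withHandle(handle ->
            handle.createQuery(
                    "SELECT payload->>'type' FROM events " +
                    "WHERE payload @> :filter::jsonb")
                .bind("filter", "{\"source\":\"api\"}")
                .mapTo(String.class)
                .list());
    }
}
```

The `::jsonb` cast on the bound parameter is ordinary PostgreSQL syntax, which JDBI passes through untouched.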

JDBI is not the right choice when:

  • Your application has a complex domain model with deep object graphs that genuinely benefit from transparent persistence. "Genuinely" is the operative word — many applications have models that look complex in Java but flatten to three or four tables.
  • You need database portability — JDBI encourages PostgreSQL-specific SQL, which is a strength until you need MySQL or Oracle.
  • You want compile-time SQL validation. jOOQ is the better choice, with its generated DSL catching typos and type mismatches at build time.
  • Your team prefers not to write SQL. This is a valid preference. Not every developer enjoys SQL. JDBI does not accommodate this, and using JDBI without SQL comfort produces worse results than using Hibernate without Hibernate expertise.

For the teams that choose it — particularly the Dropwizard ecosystem, where JDBI is the default persistence layer — the result is a stack with remarkably few moving parts. A DataSource, a JDBI instance, a PostgresPlugin, and your SQL. The database does what the database does well. The application code does what application code does well. Nothing in between is trying to be clever.
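That entire stack fits in a few lines. A minimal wiring sketch, assuming the `DataSource` comes from whatever connection pool you already run (HikariCP is a common choice, not something the stack prescribes):

```java
import javax.sql.DataSource;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.postgres.PostgresPlugin;
import org.jdbi.v3.sqlobject.SqlObjectPlugin;

// The whole persistence layer: a DataSource, a Jdbi instance, two plugins.
public class Persistence {
    static Jdbi build(DataSource dataSource) {
        return Jdbi.create(dataSource)
                .installPlugin(new PostgresPlugin())   // uuid, inet, hstore, ...
                .installPlugin(new SqlObjectPlugin()); // enables @SqlQuery DAOs
    }
}
```

In a Dropwizard application the equivalent wiring is done for you by the `JdbiFactory`, which also registers health checks and per-DAO metrics.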

And if you are looking for something to be clever on your behalf — to observe your queries, identify missing indexes, and apply optimizations without changing your code — Gold Lapel does that at the PostgreSQL layer. Where the query plans live. Where the indexes matter. Where the optimizations actually change the execution path. The SQL arrives from JDBI, stable and predictable. Gold Lapel reads it, understands it, and ensures PostgreSQL handles it as efficiently as possible.

A well-run household does not require elaborate machinery between the instruction and the execution. A clear instruction, competently delivered, is sufficient. JDBI delivers the instruction. PostgreSQL executes it. And if the execution could be improved — well, that is what the staff are for.

JDBI's willingness to stay close to SQL makes it an excellent candidate for the kind of query-level optimization I discuss in the PostgreSQL performance tuning guide. The techniques there apply with particular directness when your persistence layer is not rewriting your queries behind your back.