Spring Boot's open-in-view Default: The Silent Path to HikariCP Pool Exhaustion on PostgreSQL
Your connection pool is not too small. Your connections are being held hostage by a default you never chose.
Good evening. Allow me to draw your attention to a warning you have been dismissing for years.
Every Spring Boot application that uses JPA prints a warning at startup. It appears once, in yellow, between a dozen other log lines about Tomcat connector initialization and bean post-processing. It looks like this:
// You have seen this in your Spring Boot startup logs.
// Most of us scroll past it. That is about to change.
2026-03-05 09:14:02.341 WARN 1 --- [main] JpaBaseConfiguration$JpaWebConfiguration :
spring.jpa.open-in-view is enabled by default.
Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning.
// Spring Boot has been warning you since version 2.0 (March 2018).
// It is the only default that ships with a built-in apology.
That warning is about a feature called Open Session in View, abbreviated OSIV. It has been enabled by default in Spring Boot since version 1.0. It remains enabled by default today. The warning was added in Spring Boot 2.0, which tells you something about how long the community has known this default is problematic — and how deeply entrenched it is.
Here is what the warning does not tell you: OSIV holds a JDBC connection — one of HikariCP's precious 10 default connections — for the entire duration of every HTTP request. Not just during database queries. During external API calls. During JSON serialization. During template rendering. During everything.
If your controller calls Stripe's payment API and that call takes 800 milliseconds, a PostgreSQL connection sits idle for 800 milliseconds doing absolutely nothing. With 10 connections in the pool, you can sustain roughly 10 concurrent requests before every connection is occupied by an idle-in-transaction session and the 11th request blocks for 30 seconds, then fails.
This is not a theoretical concern. It is the single most common production issue in Spring Boot applications backed by PostgreSQL. I have seen it in startups with three engineers and in enterprises with three hundred. The pattern is always the same: the application works in development, works in staging, and fails during the first real traffic spike. The team increases the pool size. The problem returns at the next traffic threshold. The cycle repeats until someone reads an article like this one — or until the database falls over entirely.
If you will permit me, I should like to explain precisely what OSIV does, why it causes pool exhaustion, how to diagnose it in your running application, and how to remove it without setting your codebase on fire. The last part requires some care. But the alternative — holding database connections hostage during HTTP calls indefinitely — is not a posture that a well-run household can sustain.
How OSIV works: the mechanism behind the exhaustion
OSIV is implemented by the OpenEntityManagerInViewInterceptor (or its servlet filter counterpart, OpenEntityManagerInViewFilter). It registers as a Spring MVC interceptor that runs before and after every controller method.
// What open-in-view=true actually does:
//
// 1. OpenEntityManagerInViewInterceptor registers as a Spring MVC interceptor
// 2. Before the controller method runs, it opens an EntityManager (and a Session)
// 3. That Session binds to the current thread via a TransactionSynchronizationManager
// 4. The Session holds a JDBC connection (from HikariCP)
// 5. The connection is held until the HTTP response is FULLY rendered and sent
//
// Timeline for a typical request:
//
// |--- Controller ---|--- Service ---|--- External API ---|--- View ---|
// t=0ms            t=5ms           t=8ms               t=2008ms    t=2015ms
//   ^                                                               ^
//   |-- JDBC connection acquired here          returned here -------|
//   |                                                               |
//   |   Connection held for 2,015ms                                 |
//   |   Actual database work: ~10ms                                 |
//   |   Time spent idle in transaction: ~2,000ms                    |
//
// The connection is not "leaked." It is doing exactly what OSIV asked.
// The problem is that OSIV asks for something unreasonable.
The original intent is reasonable. JPA entities use lazy loading for associations — a @OneToMany collection is not fetched from the database until you call .getItems(). Without OSIV, if your controller returns an entity to a view template, and that template accesses a lazy collection, Hibernate throws a LazyInitializationException because the Session (and its JDBC connection) was already closed when the @Transactional service method returned.
OSIV solves this by keeping the Session open until the response is fully rendered. Lazy collections can be loaded at any point during the request lifecycle without explicit fetching in the service layer. It is a convenience feature. It trades connection efficiency for developer convenience.
The trade-off was reasonable in 2005, when Spring applications rendered server-side templates and rarely called external APIs from controllers. It is not reasonable in 2026, when every controller method orchestrates three microservices, two third-party APIs, and a message queue before returning a JSON response.
Inside the interceptor: what Spring is actually doing to your connections
If you would like to understand the mechanism at a source-code level — and I believe you should, because debugging OSIV requires knowing where the connection is acquired and where it is released — here is the essential logic of Spring's OpenEntityManagerInViewInterceptor.
// Simplified from Spring's actual OpenEntityManagerInViewInterceptor.
// The real source is in spring-orm, ~80 lines. This is the essence.
public class OpenEntityManagerInViewInterceptor implements AsyncWebRequestInterceptor {
@Override
public void preHandle(WebRequest request) {
// Creates a new EntityManager (which wraps a Hibernate Session)
EntityManagerFactory emf = obtainEntityManagerFactory();
EntityManager em = emf.createEntityManager();
// Binds it to the current thread — any @Autowired repository
// or @PersistenceContext injection on this thread will use THIS em.
EntityManagerHolder emHolder = new EntityManagerHolder(em);
TransactionSynchronizationManager.bindResource(emf, emHolder);
// The EntityManager is now open. If any code on this thread
// touches a JPA entity, the Session will acquire a JDBC connection
// from HikariCP. That connection will not be returned until
// afterCompletion() runs — after the response is fully sent.
}
@Override
public void afterCompletion(WebRequest request, Exception ex) {
// This runs AFTER the response body has been written to the socket.
EntityManagerHolder emHolder = (EntityManagerHolder)
TransactionSynchronizationManager.unbindResource(obtainEntityManagerFactory());
// NOW the EntityManager closes. NOW the Session closes.
// NOW the JDBC connection returns to HikariCP.
EntityManagerFactoryUtils.closeEntityManager(emHolder.getEntityManager());
}
}
// The gap between preHandle() and afterCompletion() is the entire HTTP
// request lifecycle. Every millisecond of controller logic, service calls,
// external API calls, JSON serialization, and response writing.
// The JDBC connection is held for all of it.
The critical observation is the asymmetry. The preHandle method runs before your controller. The afterCompletion method runs after the response is fully written. Between those two points, the EntityManager is bound to the current thread, the Session is open, and any database access triggers a JDBC connection checkout from HikariCP that will not be returned until afterCompletion.
I should note that the connection is not acquired in preHandle itself. The EntityManager is opened, but the JDBC connection is acquired lazily — the first time a query runs. In practice, this distinction rarely matters. The first database access in any JPA-backed controller method happens within the first few milliseconds. Once the connection is acquired, it is held for the remainder of the request lifecycle. The lazy acquisition is a micro-optimization that does not change the fundamental problem.
One detail that catches teams off guard: the afterCompletion method runs after response body writing, not after the controller method returns. If you use streaming responses, chunked transfer encoding, or server-sent events, the connection is held for the entire duration of the stream. A long-polling endpoint with OSIV enabled holds a database connection for as long as the client maintains the HTTP connection. I have seen this consume an entire pool with three concurrent long-poll clients.
The anatomy of an OSIV-exhausted pool
Consider a straightforward order-detail endpoint. It fetches an order from PostgreSQL, checks payment status with Stripe, and calculates shipping estimates with a carrier API.
@RestController
@RequestMapping("/api/orders")
public class OrderController {
@Autowired private OrderRepository orderRepository;
@Autowired private PaymentGatewayClient paymentClient;
@Autowired private ShippingService shippingService;
@GetMapping("/{id}")
public OrderDetailResponse getOrder(@PathVariable Long id) {
// OSIV has already opened a Session and acquired a JDBC connection.
Order order = orderRepository.findById(id).orElseThrow();
// Connection used: ~2ms for a primary key lookup.
PaymentStatus status = paymentClient.getStatus(order.getPaymentId());
// Connection idle: 150-800ms waiting on Stripe's API.
// pg_stat_activity shows: state = 'idle in transaction'
// Locks from the SELECT above? Still held.
ShippingEstimate estimate = shippingService.calculate(order);
// Connection idle: 50-300ms waiting on FedEx/UPS API.
// Total idle time climbing.
return new OrderDetailResponse(order, status, estimate);
// View rendering begins. If order.getItems() triggers a lazy load,
// THAT is why OSIV exists — to allow lazy loading in the view layer.
// But the price is holding the connection for the entire request.
}
}
// With OSIV enabled, this endpoint holds a connection for 200-1100ms.
// With 10 connections in HikariCP (the default), you can serve
// at most 9-50 requests per second before the pool is exhausted.
// That is not a typo. Nine to fifty per second.
The math is unforgiving. HikariCP's default maximum-pool-size is 10. If each request holds a connection for an average of 500 milliseconds (a conservative estimate when external API calls are involved), the pool can sustain:
10 connections / 0.5 seconds = 20 requests per second
Twenty. For a pool of 10 connections on a PostgreSQL server that can handle thousands of queries per second. The database is idle. The connections are idle. The pool is full. Users are seeing timeouts.
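The arithmetic generalizes: by Little's law, the connections a workload demands equal the arrival rate multiplied by the average hold time. A minimal sketch, using the illustrative figures from this article (the class and method names are mine, not from any library):

```java
// Little's law applied to pool sizing: connections needed = rate * hold time.
// Figures are the illustrative ones used in this article, not benchmarks.
public class PoolMath {

    // How many connections a given load requires.
    static double connectionsNeeded(double requestsPerSecond, double holdSeconds) {
        return requestsPerSecond * holdSeconds;
    }

    // The throughput ceiling a fixed-size pool imposes.
    static double maxRequestsPerSecond(int poolSize, double holdSeconds) {
        return poolSize / holdSeconds;
    }

    public static void main(String[] args) {
        // OSIV holding through a 500ms external call: 10 connections cap you at 20 req/sec.
        System.out.println(maxRequestsPerSecond(10, 0.5));    // 20.0
        // OSIV off, ~5ms of actual database work: the same pool sustains ~2,000 req/sec.
        System.out.println(maxRequestsPerSecond(10, 0.005));  // ~2000
        // At 20 req/sec with an 810ms hold, you would need ~16.2 connections.
        System.out.println(connectionsNeeded(20, 0.81));      // ~16.2
    }
}
```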
| Scenario | Avg Hold Time | Max RPS (pool=10) | Max RPS (pool=20) | Status |
|---|---|---|---|---|
| OSIV=true, no external APIs | 10ms | 1,000 | 2,000 | Healthy |
| OSIV=true, one external API (200ms) | 210ms | 47 | 95 | Marginal |
| OSIV=true, two external APIs (500ms total) | 510ms | 19 | 39 | Critical |
| OSIV=true, slow external API (2s) | 2,010ms | 4 | 9 | Failed |
| OSIV=false, any external APIs | 5-10ms | 1,000-2,000 | 2,000-4,000 | Healthy |
The last row is the one that matters. With OSIV disabled, the connection hold time drops to the actual database work — 5 to 10 milliseconds. The presence or absence of external API calls becomes irrelevant to pool utilization. Your pool handles the same traffic whether your controller calls zero APIs or five, because the connection is not held during those calls.
With a 1-second average hold time (entirely normal for endpoints that call two or three external services), the ceiling drops to 10 requests per second. With 2-second hold times during Stripe API slowdowns, 5 requests per second. During a Stripe partial outage — when API calls that normally return in 200ms start taking 10 seconds — your entire application stops serving requests. Not because your database is down. Not because your code has a bug. Because Stripe is slow and OSIV is holding your database connections while you wait for Stripe.
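The queueing behavior is easy to reproduce outside Spring entirely. Here is a toy model — a semaphore standing in for the pool, tryAcquire standing in for HikariCP's connection-timeout. This is not HikariCP itself, just the mechanics described above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of pool exhaustion. Each "request" checks out a permit,
// holds it for the simulated request duration, and releases it.
// Requests that cannot get a permit within the timeout fail — the
// equivalent of HikariCP's SQLTransientConnectionException.
public class ExhaustionDemo {

    static int served(int poolSize, int requests, long holdMs, long timeoutMs)
            throws InterruptedException {
        Semaphore pool = new Semaphore(poolSize);
        AtomicInteger served = new AtomicInteger();
        ExecutorService clients = Executors.newFixedThreadPool(requests);
        for (int i = 0; i < requests; i++) {
            clients.submit(() -> {
                try {
                    if (pool.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
                        try {
                            Thread.sleep(holdMs);  // the "connection" is held
                        } finally {
                            pool.release();
                        }
                        served.incrementAndGet();
                    } // else: timed out waiting for a connection
                } catch (InterruptedException ignored) { }
            });
        }
        clients.shutdown();
        clients.awaitTermination(10, TimeUnit.SECONDS);
        return served.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // OSIV-style: 500ms holds, 20 concurrent requests, pool of 10,
        // 100ms checkout timeout. The second wave times out.
        System.out.println("OSIV-style: " + served(10, 20, 500, 100) + "/20 served");
        // No-OSIV-style: ~5ms holds. Everyone is served within the timeout.
        System.out.println("no-OSIV:    " + served(10, 20, 5, 100) + "/20 served");
    }
}
```

The point of the toy is that nothing about the "database" changes between the two runs; only the hold time does.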
The instinct is to increase the pool size. This is the wrong response, for reasons that the HikariCP pool sizing formula makes clear — PostgreSQL performance degrades beyond a small number of connections. Going from 10 to 50 connections does not 5x your throughput. It might 3x your ceiling while adding context switch overhead, lock contention, and memory pressure to your database server. You are treating a symptom by worsening the disease.
What PostgreSQL sees: a wall of idle-in-transaction sessions
The most revealing diagnostic is not on the application side. It is on the PostgreSQL side.
-- What OSIV looks like from PostgreSQL's perspective
-- Run this during normal traffic to see the damage:
SELECT pid,
state,
now() - state_change AS time_in_state,
now() - xact_start AS transaction_age,
LEFT(query, 100) AS last_query,
application_name
FROM pg_stat_activity
WHERE datname = 'mydb'
AND application_name = 'HikariPool-1'  -- assumes ApplicationName is set on the datasource
ORDER BY time_in_state DESC;
-- Typical output with OSIV enabled during normal load:
--
-- pid | state | time_in_state | transaction_age | last_query
-- ------+----------------------+---------------+-----------------+--------------------------
-- 14923 | idle in transaction | 00:00:01.847 | 00:00:01.851 | SELECT ... FROM orders ...
-- 14891 | idle in transaction | 00:00:01.203 | 00:00:01.208 | SELECT ... FROM orders ...
-- 14887 | idle in transaction | 00:00:00.956 | 00:00:00.961 | SELECT ... FROM users ...
-- 14902 | idle in transaction | 00:00:00.744 | 00:00:00.748 | SELECT ... FROM orders ...
-- 14916 | idle in transaction | 00:00:00.531 | 00:00:00.536 | SELECT ... FROM products ...
-- 14894 | idle in transaction | 00:00:00.412 | 00:00:00.418 | SELECT ... FROM orders ...
-- 14909 | idle in transaction | 00:00:00.287 | 00:00:00.291 | SELECT ... FROM users ...
-- 14885 | active | 00:00:00.003 | 00:00:00.008 | SELECT ... FROM orders ...
-- 14930 | active | 00:00:00.001 | 00:00:00.004 | SELECT ... FROM products ...
-- 14898 | idle | 00:00:12.441 | (null) | DISCARD ALL
--
-- 7 out of 10 connections sitting "idle in transaction."
-- The database is doing nothing. The connections are doing nothing.
-- But they are HELD. No other thread can use them.
-- The two "active" connections are the only ones doing actual work.
The idle in transaction state is the signature of OSIV under load. Each of those connections has an open transaction (started by the @Transactional service method or by OSIV's session-level transaction), has completed its last query, and is waiting for the application to do something — which happens to be waiting for an HTTP response from Stripe.
These connections are not just wasting pool slots. They are actively harmful to PostgreSQL in three ways:
Lock retention. Any row-level locks or advisory locks acquired during the transaction are held for the duration. A simple SELECT ... FOR UPDATE in the service layer holds the lock for the full request lifecycle, blocking other transactions that need those rows. Even a plain SELECT holds an AccessShareLock on the table for the life of the transaction, which blocks ALTER TABLE and other DDL that needs an AccessExclusiveLock. If you are wondering why your Flyway migrations hang during deployments, this may be why.
Snapshot retention. PostgreSQL's MVCC system keeps old row versions visible to active transactions. An idle-in-transaction session prevents VACUUM from cleaning up dead tuples created after the transaction started. During traffic spikes, dozens of long-lived OSIV transactions can cause table bloat that persists long after the spike ends. I have seen tables grow by 40% over a single-day incident because OSIV transactions prevented autovacuum from doing its work.
Connection slot consumption. PostgreSQL's max_connections defaults to 100. With OSIV holding connections for seconds instead of milliseconds, you consume those slots 200-1,000x faster than necessary. Three application instances with pool_size=10 and OSIV enabled can consume 30 of your 100 slots during moderate load — and most of those slots are doing nothing. Add a fourth instance during autoscaling and you are at 40. Add a data pipeline, a reporting tool, and an admin console, and you are approaching the limit with a database that is 95% idle.
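PostgreSQL does offer a server-side backstop for the idle-in-transaction problem. A hedged sketch — the role name app_user is a placeholder, and note that with OSIV still enabled this setting terminates in-flight requests rather than fixing anything:

```sql
-- Backstop, not a fix: kill any session that sits idle in a transaction
-- for longer than 10 seconds. With OSIV enabled this will TERMINATE live
-- requests mid-flight, so treat it as a tripwire that surfaces the problem.
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10s';
SELECT pg_reload_conf();

-- Narrower alternative: set it per role (app_user is a placeholder name).
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '10s';
```

The value of the tripwire is that it converts a silent pool-exhaustion incident into a loud, attributable error in your application logs.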
HikariCP metrics: reading the distress signals
Spring Boot Actuator exposes HikariCP metrics automatically. Here is what pool exhaustion looks like in the numbers.
// HikariCP metrics during OSIV-induced pool exhaustion
// (exposed via Spring Boot Actuator at /actuator/metrics)
// HEALTHY pool (no OSIV, or fast endpoints only):
// hikaricp.connections.active: 2
// hikaricp.connections.idle: 8
// hikaricp.connections.pending: 0
// hikaricp.connections.timeout: 0
// hikaricp.connections.usage.p99: 8ms
// OSIV pool under moderate load (20 req/sec with external API calls):
// hikaricp.connections.active: 10 <-- ALL connections in use
// hikaricp.connections.idle: 0 <-- none available
// hikaricp.connections.pending: 14 <-- 14 threads BLOCKED waiting
// hikaricp.connections.timeout: 847 <-- 847 requests have FAILED
// hikaricp.connections.usage.p99: 1,847ms <-- connections held ~2 seconds
// The pending metric is the pain metric.
// Each pending thread is a user staring at a spinner.
// Each timeout is a user seeing a 500 error.
// The stack trace in the timeout exception:
// java.sql.SQLTransientConnectionException:
// HikariPool-1 - Connection is not available,
// request timed out after 30000ms.
// at com.zaxxer.hikari.pool.HikariPool.createTimeoutException
// at com.zaxxer.hikari.pool.HikariPool.getConnection
// at ...OpenEntityManagerInViewFilter.doFilterInternal <-- OSIV in the stack
The critical metric is hikaricp.connections.pending. A healthy pool has zero pending connections. Any sustained value above zero means application threads are blocked, waiting for a connection that some other thread is holding while it waits for Stripe to respond.
The hikaricp.connections.usage.p99 metric is the diagnostic fingerprint. In a well-behaved application without OSIV (or with OSIV disabled), P99 connection usage is 5-50 milliseconds. When OSIV is holding connections through external API calls, P99 jumps to 500-5,000 milliseconds. If your P99 connection usage is measured in seconds, you have an OSIV problem or a transaction scoping problem. Usually both.
The hikaricp.connections.timeout counter is the damage metric. Each increment represents a request that waited the full connection-timeout (default: 30 seconds) and then failed with a SQLTransientConnectionException. Your user saw a 500 error after staring at a loading spinner for 30 seconds. This is not a graceful degradation. It is a hard failure caused by a configuration default.
I should mention the hikaricp.connections.creation metric as well. In a healthy pool with minimumIdle equal to maximumPoolSize (which is HikariCP's recommendation), connection creation should happen only at startup and during periodic recycling. If you see frequent creation events during traffic spikes, it means connections are being evicted and recreated — often because PostgreSQL's idle_in_transaction_session_timeout is terminating OSIV-held connections. The pool is fighting against the timeout. Neither side wins.
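If you must live with OSIV while planning the migration, the pool can at least be configured to fail fast and to name the culprits. A hedged sketch — the values are illustrative, not recommendations, though the property names are standard Spring Boot HikariCP configuration:

```yaml
# application.yml — survival settings while OSIV is still enabled.
# Values are illustrative; tune them to your own latency budget.
spring:
  datasource:
    hikari:
      maximum-pool-size: 10        # keep it small; see the sizing discussion above
      minimum-idle: 10             # fixed-size pool, per HikariCP's own guidance
      connection-timeout: 5000     # fail in 5s instead of the default 30s
      leak-detection-threshold: 10000  # log any connection held longer than 10s
```

The leak-detection-threshold setting is particularly useful here: HikariCP logs a stack trace from the thread that checked out any connection held past the threshold, which points directly at the endpoint responsible for the long hold.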
Connection states under OSIV: the table that tells the story
Understanding what each PostgreSQL connection state means under OSIV clarifies why the pool empties so quickly.
| Connection State | What It Means | OSIV Behavior | Typical % |
|---|---|---|---|
| active | Executing a query right now | Normal. Brief. | 5-20% |
| idle | Connection open, no transaction | Only between requests. Rare under load. | 0-10% |
| idle in transaction | Transaction open, no query running | OSIV default. Connection held for entire HTTP request. | 70-90% |
| idle in transaction (aborted) | Transaction hit an error, not yet rolled back | Worst case. Connection stuck until timeout. | 0-5% |
In a healthy pool without OSIV, you expect 80-95% of connections to be idle (available) and 5-20% active (running queries). With OSIV, the distribution inverts: 70-90% of connections are idle in transaction, held by requests waiting on non-database work. The pool appears full despite the database being nearly idle.
This is the insidious part. Your PostgreSQL dashboard shows CPU at 5%, your query latencies at 2ms, your pg_stat_statements clean. Everything looks healthy — from the database's perspective. The problem is entirely in the connection lifecycle management, invisible to database monitoring and visible only in pool metrics or pg_stat_activity state distribution.
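You can measure the state distribution from the table above on your own database with one query — a sketch assuming the database name mydb used earlier:

```sql
-- One row per connection state, with the share of connections it occupies.
SELECT state,
       count(*) AS connections,
       round(100.0 * count(*) / sum(count(*)) OVER (), 1) AS pct
FROM pg_stat_activity
WHERE datname = 'mydb'
GROUP BY state
ORDER BY connections DESC;
```

If "idle in transaction" dominates during load, you are looking at the OSIV signature; run it during a traffic spike for the clearest picture.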
The OSIV tax on query performance: what pg_stat_statements reveals
There is a subtler dimension to the OSIV problem that pg_stat_statements makes visible — and it has nothing to do with connection hold time.
-- Using pg_stat_statements to measure the OSIV tax
-- This query shows the gap between query execution time and
-- connection hold time for your most expensive queries.
SELECT
LEFT(query, 80) AS query,
calls,
ROUND(total_exec_time::numeric, 1) AS total_ms,
ROUND(mean_exec_time::numeric, 2) AS mean_ms,
ROUND(max_exec_time::numeric, 1) AS max_ms,
rows
FROM pg_stat_statements
WHERE dbid = (SELECT oid FROM pg_database WHERE datname = 'mydb')
ORDER BY total_exec_time DESC
LIMIT 10;
-- Typical output:
-- query | calls | total_ms | mean_ms | max_ms | rows
-- ---------------------------------------------+-------+----------+---------+--------+------
-- SELECT o.* FROM orders o WHERE o.id = $1 | 84201 | 1247.3 | 0.01 | 2.4 | 84201
-- SELECT i.* FROM order_items i WHERE i.orde.. | 84201 | 987.6 | 0.01 | 1.8 | 241k
-- SELECT u.* FROM users u WHERE u.id = $1 | 31044 | 412.8 | 0.01 | 3.1 | 31044
-- SELECT p.* FROM products p WHERE p.id = $1 | 241k | 3841.2 | 0.02 | 4.7 | 241k
-- These queries are FAST. Mean execution: 0.01-0.02ms.
-- PostgreSQL is doing its job admirably. The queries themselves are fine.
-- But look at the connection-level metrics from HikariCP:
-- hikaricp.connections.usage.mean: 847ms
-- hikaricp.connections.usage.p99: 2,140ms
-- The queries take 0.01ms. The connections are held for 847ms.
-- That ratio — 0.01ms query time vs 847ms hold time — is OSIV.
-- The connection is idle for 99.999% of the time it is checked out.
OSIV does not make individual queries slower. Your SELECT statements execute in the same time whether OSIV is on or off. What OSIV does is enable lazy loading patterns that generate far more queries than you intended. Because the Session is open during JSON serialization, every lazy association that Jackson touches fires a query. You wrote one repository call. Hibernate executed twenty-six.
The pg_stat_statements output shows the evidence: thousands of small, fast queries to order_items, products, users — each one triggered by a lazy association during serialization. Each individual query is fast. But twenty-six queries per request, multiplied by thousands of requests, adds up. And each of those queries extends the connection hold time by a fraction of a millisecond, which under OSIV means extending the total hold time of a connection that is already being held far too long.
Without OSIV, these lazy loads would throw LazyInitializationException. You would be forced to write a proper JOIN FETCH or @EntityGraph query. One query instead of twenty-six. That is not just a connection efficiency improvement. It is a genuine query optimization that OSIV prevented you from making.
How OSIV hides N+1 queries: an EXPLAIN ANALYZE demonstration
Allow me to make the previous point concrete with actual query plans. This is what happens when an endpoint returns a list of orders with their items, and OSIV silently loads the items collection during serialization.
-- The difference in query plans is not the problem.
-- The queries themselves are identical with or without OSIV.
-- What changes is WHEN the connection is held.
-- But OSIV often masks N+1 queries by making lazy loading invisible.
-- Without OSIV, you would have seen LazyInitializationException
-- and been forced to write a proper JOIN FETCH. With OSIV, Hibernate
-- silently fires these queries during JSON serialization:
-- Request to GET /api/orders?status=PENDING (returns 25 orders)
-- Query 1: Load orders
EXPLAIN ANALYZE
SELECT o.id, o.status, o.created_at, o.customer_id
FROM orders o
WHERE o.status = 'PENDING'
ORDER BY o.created_at DESC
LIMIT 25;
-- Planning Time: 0.142 ms
-- Execution Time: 0.387 ms
--
-- Index Scan using idx_orders_status_created on orders o
-- Index Cond: (status = 'PENDING')
-- Rows Removed by Filter: 0
-- Actual Rows: 25
-- Buffers: shared hit=28
-- Then, during Jackson serialization (with OSIV keeping the Session open),
-- accessing order.getItems() on each of the 25 orders fires:
-- Queries 2-26: One per order (the N+1 pattern)
EXPLAIN ANALYZE
SELECT i.id, i.order_id, i.product_id, i.quantity, i.unit_price
FROM order_items i
WHERE i.order_id = 42; -- repeated 25 times with different IDs
-- Planning Time: 0.089 ms
-- Execution Time: 0.134 ms (per query)
--
-- Index Scan using idx_order_items_order_id on order_items i
-- Index Cond: (order_id = 42)
-- Actual Rows: 3
-- Buffers: shared hit=4
-- Total: 1 + 25 = 26 queries
-- Total execution time: 0.387 + (25 * 0.134) = 3.737ms of DB time
-- Total connection hold time with OSIV: ~850ms (includes serialization)
-- Connection utilization: 0.4%
-- The fix (one query with JOIN FETCH):
EXPLAIN ANALYZE
SELECT o.id, o.status, o.created_at, o.customer_id,
i.id, i.product_id, i.quantity, i.unit_price
FROM orders o
LEFT JOIN order_items i ON i.order_id = o.id
WHERE o.status = 'PENDING'
ORDER BY o.created_at DESC;
-- Planning Time: 0.298 ms
-- Execution Time: 0.912 ms
--
-- Sort
-- Sort Key: o.created_at DESC
-- Sort Method: quicksort Memory: 42kB
-- -> Hash Right Join
-- Hash Cond: (i.order_id = o.id)
-- -> Seq Scan on order_items i
-- Actual Rows: 847
-- -> Hash
-- -> Index Scan using idx_orders_status_created on orders o
-- Index Cond: (status = 'PENDING')
-- Actual Rows: 25
-- Buffers: shared hit=28
-- 1 query instead of 26. 0.912ms instead of 3.737ms.
-- But the real win: connection held for ~1ms, not ~850ms.
The N+1 pattern is not new. Every JPA tutorial warns about it. But OSIV makes it invisible. Without OSIV, the first call to order.getItems() outside a transaction throws an exception. The developer writes a JOIN FETCH. Problem solved at development time. With OSIV, the first call to order.getItems() works perfectly — Hibernate silently fires a query, loads the data, and returns it. The developer never knows. The 25 extra queries never show up in a code review. They only show up in pg_stat_statements and in your P99 latency.
I am not suggesting that OSIV was designed to hide bad patterns. The original intent — supporting lazy loading in view templates — was legitimate. But the effect, in modern REST API applications, is that OSIV converts loud development-time failures (LazyInitializationException) into silent runtime performance degradation (hidden N+1 queries plus connection hold time). That is, if you will permit me a brief editorial comment, the worst possible trade-off. I would rather have a loud failure at development time than a quiet one at 3 AM in production.
The fix: disable OSIV
One line. That is all it takes to disable OSIV.
# application.yml — the one-line fix
spring:
jpa:
open-in-view: false
# Or application.properties:
# spring.jpa.open-in-view=false
# That is it. One property. The warning disappears.
# The connection is no longer held for the entire request lifecycle.
#
# What changes:
# - The Session/EntityManager is scoped to @Transactional boundaries only
# - JDBC connections are acquired when a transaction starts
# - JDBC connections are returned when the transaction commits or rolls back
# - Lazy loading outside a transaction throws LazyInitializationException
#
# That last point is the reason OSIV exists in the first place.
# It is also the reason disabling it requires code changes.
Disabling OSIV means the JPA Session (and its JDBC connection) is scoped to @Transactional method boundaries only. The connection is acquired when a transactional method is entered and returned to the pool when it exits. No connection is held during external API calls, JSON serialization, or view rendering.
The connection timeline changes dramatically:
// Connection timeline comparison: OSIV=true vs OSIV=false
// Same endpoint, same code, same external API calls.
// ═══════════════════════════════════════════════════════════════════
// OSIV=true (default)
// ═══════════════════════════════════════════════════════════════════
//
// Thread-1: [===CONN HELD====================================]
// | DB | Stripe API | FedEx | DB | JSON |
// 0ms 3ms 503ms 803ms 805ms 810ms
//
// Connection hold time: 810ms
// Database work: 5ms
// Idle-in-transaction: 805ms
// Pool connections used: 1 for 810ms
//
// At 20 req/sec, you need: 20 * 0.810 = 16.2 connections
// HikariCP default pool: 10
// Result: pool exhaustion at 12 req/sec
// ═══════════════════════════════════════════════════════════════════
// OSIV=false (recommended)
// ═══════════════════════════════════════════════════════════════════
//
// Thread-1: [=CONN=] [=CONN=]
// | DB | Stripe API | FedEx | DB | JSON |
// 0ms 3ms 503ms 803ms 805ms 810ms
//
// Connection hold time: 3ms + 2ms = 5ms (two short checkouts)
// Database work: 5ms
// Idle-in-transaction: 0ms
// Pool connections used: 1 for 5ms total
//
// At 20 req/sec, you need: 20 * 0.005 = 0.1 connections
// HikariCP default pool: 10
// Result: pool is 99% idle at 20 req/sec
// Result: pool handles 2,000 req/sec before saturation
The throughput difference is not incremental. It is categorical. The same pool of 10 connections goes from supporting 12 requests per second to supporting 2,000 requests per second. Same database. Same queries. Same pool size. The only change is when the connection is held and released.
The cost of disabling OSIV: LazyInitializationException
There is a reason OSIV is still the default. Disabling it breaks code that relies on lazy loading outside of @Transactional boundaries — which, in many Spring Boot applications, means a lot of code.
// Before: OSIV masks lazy loading problems
@RestController
public class OrderController {
@Autowired private OrderService orderService;
@GetMapping("/orders/{id}")
public OrderResponse getOrder(@PathVariable Long id) {
Order order = orderService.findById(id); // @Transactional
// Transaction is committed. Connection returned to pool.
// With OSIV=true, this works — OSIV keeps the Session open.
// With OSIV=false, this throws LazyInitializationException:
List<OrderItem> items = order.getItems(); // LAZY collection
return new OrderResponse(order, items);
}
}
// After: explicit fetching in the service layer
@Service
public class OrderService {
@Transactional(readOnly = true)
public OrderWithItems findByIdWithItems(Long id) {
Order order = orderRepository.findById(id).orElseThrow();
// Force initialization inside the transaction:
Hibernate.initialize(order.getItems());
// Or use a fetch join query:
// Order order = orderRepository.findByIdWithItems(id);
return new OrderWithItems(order, order.getItems());
}
}
// Repository with explicit fetch join:
public interface OrderRepository extends JpaRepository<Order, Long> {
@Query("SELECT o FROM Order o JOIN FETCH o.items WHERE o.id = :id")
Optional<Order> findByIdWithItems(@Param("id") Long id);
// Or use an EntityGraph:
@EntityGraph(attributePaths = {"items", "items.product"})
Optional<Order> findWithItemsById(Long id);
}
The most common patterns that break when OSIV is disabled:
1. Lazy collections accessed in controllers. If your controller returns an entity and the serializer (Jackson) accesses a lazy @OneToMany collection, it triggers a database load. Without OSIV, the Session is closed, and you get a LazyInitializationException. Fix: use JOIN FETCH queries, @EntityGraph, or DTOs in the service layer.
2. Lazy associations in view templates. Thymeleaf, Freemarker, or JSP templates that access entity properties requiring lazy loading. Fix: fetch everything needed in the service layer and pass DTOs to the view.
3. Hibernate.initialize() called outside @Transactional. Utility code that forces lazy loading outside a transaction boundary. Fix: move the initialization into the transactional service method.
Every one of these fixes makes your code more explicit about its data access patterns. That is not a side effect — it is the point. OSIV hides data access behind lazy-loading magic. Disabling OSIV forces you to declare what data each endpoint needs, when it is loaded, and how it is fetched. This is better architecture. It is also the architecture that does not exhaust your connection pool.
The migration is not trivial for large codebases. A Spring Boot application with 200 endpoints may have 30-50 that rely on OSIV's lazy loading behavior. Each requires a JOIN FETCH query or DTO transformation in the service layer. But the alternative — holding database connections hostage during HTTP calls indefinitely — is not sustainable. The migration work is finite. The production incidents are not.
EntityGraph: the surgical tool for post-OSIV fetching
When you disable OSIV, @EntityGraph becomes your most important tool. It allows you to declare, per query, exactly which associations to load — without writing JPQL or native SQL.
// EntityGraph: the surgical alternative to JOIN FETCH
// Define the graph on the entity:
@Entity
@NamedEntityGraph(
name = "Order.withItemsAndProducts",
attributeNodes = {
@NamedAttributeNode(value = "items", subgraph = "items-subgraph")
},
subgraphs = {
@NamedSubgraph(
name = "items-subgraph",
attributeNodes = { @NamedAttributeNode("product") }
)
}
)
public class Order {
@Id
private Long id;
@OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
private List<OrderItem> items;
// ...
}
// Use it in the repository:
public interface OrderRepository extends JpaRepository<Order, Long> {
// Method 1: annotation-based EntityGraph
@EntityGraph(value = "Order.withItemsAndProducts", type = EntityGraph.EntityGraphType.LOAD)
Optional<Order> findById(Long id);
// Method 2: ad-hoc EntityGraph for a specific query
@EntityGraph(attributePaths = {"items", "items.product", "customer"})
@Query("SELECT o FROM Order o WHERE o.status = :status")
List<Order> findByStatusWithDetails(@Param("status") String status);
}
// What Hibernate generates for the EntityGraph:
//
// SELECT o.id, o.status, o.created_at,
// i.id, i.quantity, i.price,
// p.id, p.name, p.sku
// FROM orders o
// LEFT JOIN order_items i ON i.order_id = o.id
// LEFT JOIN products p ON p.id = i.product_id
// WHERE o.id = ?
//
// One query. All related data loaded. No N+1. No OSIV required.
EntityGraph has a meaningful advantage over JOIN FETCH in JPQL: it is composable. You can define named graphs on the entity and reference them from different repository methods. A "list" endpoint uses a lightweight graph (items only). A "detail" endpoint uses a heavier graph (items, items.product, customer, shippingAddress). The entity definition is the same. The fetching strategy varies by use case.
There is a trap to be aware of. @EntityGraph with multiple @OneToMany collections produces a Cartesian product — if an order has 5 items and 3 shipments, the query returns 15 rows. Hibernate deduplicates in memory, but the wire transfer is wasteful. For entities with multiple collections, use separate queries or a @BatchSize annotation instead. The honest recommendation is: @EntityGraph for single-collection fetching, separate queries for multi-collection fetching.
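The batch-fetching alternative mentioned above can be sketched as follows. This is a hypothetical entity (the Shipment collection and all field names are illustrative) showing Hibernate's @BatchSize on two lazy collections:

```java
// Instead of JOIN FETCHing two collections (which multiplies rows),
// @BatchSize loads each lazy collection with IN-list queries that
// cover up to 25 owning entities per SELECT.
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.Id;
import jakarta.persistence.OneToMany;
import java.util.List;
import org.hibernate.annotations.BatchSize;

@Entity
public class Order {
    @Id
    private Long id;

    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    @BatchSize(size = 25) // SELECT ... FROM order_items WHERE order_id IN (?, ?, ...)
    private List<OrderItem> items;

    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    @BatchSize(size = 25)
    private List<Shipment> shipments;
}
```

Loading 100 orders and touching both collections then costs roughly 1 + 4 + 4 small queries instead of one row-multiplying join or 200 single-row lazy loads. The accesses must still happen inside the transaction boundary.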
DTO projections: the cleanest exit from OSIV dependency
If I may suggest the approach I find most elegant: stop returning entities from your service layer entirely. Return DTO projections that contain exactly the data the consumer needs. No lazy associations. No Hibernate proxies. No OSIV required.
// DTO projections: the cleanest exit from OSIV dependency
// Instead of returning entities (which have lazy associations),
// return exactly the data you need.
// Spring Data JPA interface projection:
public interface OrderSummary {
Long getId();
String getStatus();
LocalDateTime getCreatedAt();
String getCustomerName(); // derived from JOIN
int getItemCount(); // derived from COUNT
BigDecimal getTotalAmount(); // derived from SUM
}
public interface OrderRepository extends JpaRepository<Order, Long> {
@Query("""
SELECT o.id AS id,
o.status AS status,
o.createdAt AS createdAt,
c.name AS customerName,
COUNT(i.id) AS itemCount,
SUM(i.quantity * i.unitPrice) AS totalAmount
FROM Order o
JOIN o.customer c
LEFT JOIN o.items i
WHERE o.status = :status
GROUP BY o.id, o.status, o.createdAt, c.name
ORDER BY o.createdAt DESC
""")
List<OrderSummary> findOrderSummariesByStatus(@Param("status") String status);
}
// What Hibernate generates:
// SELECT o.id, o.status, o.created_at, c.name,
// COUNT(i.id), SUM(i.quantity * i.unit_price)
// FROM orders o
// JOIN customers c ON c.id = o.customer_id
// LEFT JOIN order_items i ON i.order_id = o.id
// WHERE o.status = 'PENDING'
// GROUP BY o.id, o.status, o.created_at, c.name
// ORDER BY o.created_at DESC
// Benefits:
// - No entity returned = no lazy associations = no OSIV needed
// - SELECT only the columns you need = less data over the wire
// - Aggregation in SQL = no application-side loop to sum items
// - Immutable projection = thread-safe, cacheable
// - Connection held only during the query execution (~1-3ms)
Interface projections are the lightest-weight option. Spring Data JPA generates the implementation at runtime. The query selects only the columns mapped by the interface methods. No entity is loaded. No Hibernate first-level cache is populated. No dirty checking happens. The connection is held for the duration of the query — typically 1-3 milliseconds — and released immediately.
For more complex projections, record classes (Java 16+) work well:
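A minimal sketch, with fields mirroring the interface projection above (the entity names and the repository query shown in the comment are illustrative):

```java
import java.math.BigDecimal;
import java.time.LocalDateTime;

// Class-based (record) projection: immutable, no proxies, no open Session.
record OrderDetail(
        Long id,
        String status,
        LocalDateTime createdAt,
        String customerName,
        BigDecimal totalAmount) {

    // Derived accessor — plain Java over already-fetched data, no lazy loading.
    public boolean isPending() {
        return "PENDING".equals(status);
    }
}

// Repository usage (sketch) via a JPQL constructor expression:
// @Query("SELECT new com.example.OrderDetail(o.id, o.status, o.createdAt, " +
//        "c.name, SUM(i.quantity * i.unitPrice)) " +
//        "FROM Order o JOIN o.customer c LEFT JOIN o.items i " +
//        "WHERE o.id = :id GROUP BY o.id, o.status, o.createdAt, c.name")
// Optional<OrderDetail> findDetailById(@Param("id") Long id);
```

Records pair naturally with projections: the components are final, equality is structural, and there is nothing for Jackson to lazily trigger during serialization.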
The trade-off is that projections are read-only. You cannot modify a projection and save it back. For write operations, you still load the entity inside a @Transactional method, modify it, and save. But for the vast majority of endpoints — the read-heavy endpoints that dominate most applications — projections eliminate the OSIV question entirely. There is nothing to lazy-load. There is no session to keep open. There is no connection to hold.
"Every service in your infrastructure is a commitment. It requires monitoring, alerting, backup strategies, security patches, and at least one engineer who understands its failure modes at 3 AM."
— from You Don't Need Redis, Chapter 2: The Infrastructure You Were Told You Needed
Read-only transactions: a significant optimization you are likely not using
While we are refactoring the service layer, allow me to draw attention to an annotation attribute that most Spring developers ignore.
// @Transactional(readOnly = true): a significant optimization
@Service
public class OrderQueryService {
// Read-only transactions tell Hibernate to skip dirty checking.
// Hibernate normally compares every loaded entity to its original
// state at flush time — even if you only read it. ReadOnly skips this.
@Transactional(readOnly = true)
public List<OrderSummary> getRecentOrders(String status) {
return orderRepository.findOrderSummariesByStatus(status);
}
// What readOnly = true actually does:
// 1. Sets the Session FlushMode to MANUAL (no auto-flush)
// 2. PostgreSQL receives SET transaction_read_only = on
// 3. Hibernate skips dirty checking (no snapshot comparison)
// 4. PostgreSQL can route to read replicas (if your proxy supports it)
//
// Performance impact:
// - Dirty checking on 100 entities: ~0.5-2ms per flush
// - With readOnly = true: 0ms (no flush, no comparison)
// - For list endpoints returning 50-500 entities, this saves
// measurable CPU time in both Hibernate and PostgreSQL
// DO NOT use readOnly for methods that write data.
// PostgreSQL will throw: ERROR: cannot execute INSERT in a read-only transaction
}
// Combine readOnly with DTO projections for maximum efficiency:
// - readOnly = true: no dirty checking, no flush
// - DTO projection: no entity tracking, no lazy associations
// - Result: connection held for query duration only (~1-5ms),
// zero Hibernate overhead, no OSIV dependency
The combination of readOnly = true and DTO projections is the most connection-efficient pattern available in Spring Boot + JPA. The connection is held for the minimum possible time (query execution only), Hibernate performs no dirty checking, and PostgreSQL can optimize for read-only access. If Gold Lapel or another proxy is routing queries to read replicas, the readOnly flag is the signal it uses.
I am occasionally asked whether readOnly truly matters for performance. The answer depends on entity count. For a method that loads 3 entities, dirty checking takes microseconds — negligible. For a method that loads 500 entities (a paginated list with associations), dirty checking compares 500 entity snapshots field by field. That is measurable. With readOnly = true, the comparison does not happen. The flush does not happen. The CPU time is reclaimed.
The @Transactional antipattern that OSIV conceals
OSIV is half the problem. The other half is @Transactional methods that span non-database work. OSIV often masks this issue because the connection is already held for the full request — widening the @Transactional boundary does not appear to make things worse. But it does, in ways that survive even after OSIV is disabled.
// The @Transactional antipattern that OSIV enables and conceals:
@Service
public class OrderService {
@Transactional
public OrderResult processOrder(CreateOrderRequest request) {
// 1. Database work: create order (~3ms)
Order order = orderRepository.save(new Order(request));
// 2. External API: charge payment (~500ms)
// Connection is idle. Transaction is open. Locks are held.
PaymentResult payment = paymentGateway.charge(request.getPaymentInfo());
// 3. External API: reserve inventory (~200ms)
// Still idle. Still holding the connection.
inventoryService.reserve(order.getItems());
// 4. External API: send confirmation email (~100ms)
// 800ms of idle connection time and counting.
emailService.sendConfirmation(order);
// 5. Database work: update order status (~2ms)
order.setStatus(payment.isSuccess() ? "CONFIRMED" : "FAILED");
orderRepository.save(order);
return new OrderResult(order);
}
// Total connection hold time: ~805ms
// Actual database time: ~5ms
// Connection utilization: 0.6%
}
// The fix: narrow the transaction boundary
@Service
public class OrderService {
@Transactional
public Order createOrder(CreateOrderRequest request) {
return orderRepository.save(new Order(request));
// Connection held: ~3ms. Released immediately.
}
// No @Transactional — no connection held during external calls
public OrderResult processOrder(CreateOrderRequest request) {
Order order = createOrder(request);
PaymentResult payment = paymentGateway.charge(request.getPaymentInfo());
inventoryService.reserve(order.getItems());
emailService.sendConfirmation(order);
return confirmOrder(order.getId(), payment);
}
@Transactional
public OrderResult confirmOrder(Long orderId, PaymentResult payment) {
Order order = orderRepository.findById(orderId).orElseThrow();
order.setStatus(payment.isSuccess() ? "CONFIRMED" : "FAILED");
orderRepository.save(order);
return new OrderResult(order);
// Connection held: ~2ms. Released immediately.
}
}
// Total connection hold time across both transactions: ~5ms
// Same functionality. 160x less connection pressure. The restructured version acquires a connection twice — once for 3 milliseconds, once for 2 milliseconds — instead of once for 805 milliseconds. The connection utilization goes from 0.6% to nearly 100%. The pool can serve 160 times more requests per second.
This pattern also improves resilience. If the Stripe API call fails in the original version, the entire transaction rolls back, including the order creation. In the restructured version, the order is created in its own transaction. Payment failure can be handled as a status update rather than a rollback, preserving the order for retry.
Audit your @Transactional methods. Any method that calls an external service, sends an email, publishes a message, or performs significant computation should not be @Transactional. The database work should be in small, focused @Transactional methods. The orchestration — the calls to external services, the conditional logic — should live outside the transaction boundary.
The self-invocation trap: why your narrowed transactions might not work
There is a detail about Spring's @Transactional that catches teams during OSIV migration. It is important enough to warrant its own section, because the failure mode is silent and the debugging is painful.
// A common trap when narrowing @Transactional boundaries:
// Spring's @Transactional uses AOP proxies. Self-invocation bypasses the proxy.
@Service
public class OrderService {
// THIS DOES NOT WORK as expected:
public OrderResult processOrder(CreateOrderRequest request) {
Order order = createOrder(request); // Direct call — no proxy!
// createOrder() runs WITHOUT a transaction, because Spring's
// proxy is not involved in self-invocation.
// ...
}
@Transactional
public Order createOrder(CreateOrderRequest request) {
return orderRepository.save(new Order(request));
}
}
// Fix 1: Extract to a separate @Service class (simplest, recommended)
@Service
public class OrderPersistenceService {
@Transactional
public Order createOrder(CreateOrderRequest request) {
return orderRepository.save(new Order(request));
}
@Transactional
public OrderResult confirmOrder(Long orderId, PaymentResult payment) {
Order order = orderRepository.findById(orderId).orElseThrow();
order.setStatus(payment.isSuccess() ? "CONFIRMED" : "FAILED");
return new OrderResult(orderRepository.save(order));
}
}
@Service
public class OrderService {
@Autowired private OrderPersistenceService persistence;
public OrderResult processOrder(CreateOrderRequest request) {
Order order = persistence.createOrder(request);
// Connection released. External calls happen here.
PaymentResult payment = paymentGateway.charge(request.getPaymentInfo());
inventoryService.reserve(order.getItems());
emailService.sendConfirmation(order);
return persistence.confirmOrder(order.getId(), payment);
}
}
// Fix 2: Inject the bean into itself (works, but reads strangely)
// Fix 3: Use TransactionTemplate for programmatic control (next example)
Spring's @Transactional works through AOP proxies. When you call a @Transactional method from outside the class, Spring's proxy intercepts the call and wraps it in a transaction. When you call a @Transactional method from inside the same class (self-invocation), the proxy is bypassed. The method runs without a transaction. Your carefully narrowed transaction boundary does not exist.
The fix is straightforward: extract the transactional methods into a separate @Service class. This is not just a workaround for a Spring limitation — it produces better architecture. The persistence service handles database operations. The orchestration service handles business logic and external calls. The separation of concerns is clean, testable, and — most importantly — each @Transactional method actually runs in a transaction.
TransactionTemplate: programmatic control for complex flows
For cases where extracting a separate service class feels excessive — utility methods, one-off batch operations, or methods with multiple conditional transaction boundaries — Spring provides TransactionTemplate as a programmatic alternative to @Transactional.
// TransactionTemplate: programmatic transaction boundaries
// Useful when you need fine-grained control without creating extra classes.
@Service
public class OrderService {
@Autowired private TransactionTemplate txTemplate;
@Autowired private OrderRepository orderRepository;
public OrderResult processOrder(CreateOrderRequest request) {
// Transaction 1: create the order (connection held ~3ms)
Order order = txTemplate.execute(status -> {
return orderRepository.save(new Order(request));
});
// No transaction. No connection. External calls are free.
PaymentResult payment = paymentGateway.charge(request.getPaymentInfo());
inventoryService.reserve(order.getItems());
emailService.sendConfirmation(order);
// Transaction 2: update the order status (connection held ~2ms)
return txTemplate.execute(status -> {
Order o = orderRepository.findById(order.getId()).orElseThrow();
o.setStatus(payment.isSuccess() ? "CONFIRMED" : "FAILED");
orderRepository.save(o);
return new OrderResult(o);
});
}
}
// TransactionTemplate advantages:
// - No AOP proxy gotchas — transactions are explicit, not annotation-magic
// - Fine-grained control within a single method
// - Clear visual boundary: "this lambda holds a connection, nothing else does"
// - Works with both read-write and read-only transactions:
// Read-only transaction:
txTemplate.setReadOnly(true);
List<Order> orders = txTemplate.execute(status -> {
return orderRepository.findByStatus("PENDING");
});
TransactionTemplate has no proxy gotcha. The transaction boundary is the lambda. What is inside the lambda runs in a transaction. What is outside does not. The JDBC connection is acquired when the lambda starts and released when it returns. There is no ambiguity, no annotation magic, and no self-invocation trap.
I should offer the honest counterpoint: TransactionTemplate is more verbose than @Transactional. For straightforward service methods with a single transaction, the annotation is cleaner. TransactionTemplate earns its keep when you need multiple transaction boundaries within a single method, or when you want to make the transaction scope visually obvious in a complex orchestration flow. Use both tools where they fit best.
Detecting OSIV problems with HikariCP leak detection
If you suspect OSIV is causing pool pressure but want evidence before making changes, HikariCP's built-in leak detection will show you exactly what is happening.
# HikariCP leak detection — your best diagnostic tool
spring:
datasource:
hikari:
maximum-pool-size: 10
leak-detection-threshold: 2000 # 2 seconds (in milliseconds)
# When a connection is held longer than 2 seconds, HikariCP logs:
#
# WARN com.zaxxer.hikari.pool.ProxyLeakTask -
# Connection leak detection triggered for conn0: url=jdbc:postgresql://...,
# stack trace follows:
# java.lang.Exception: Apparent connection leak detected
# at com.zaxxer.hikari.pool.ProxyLeakTask.run(ProxyLeakTask.java:35)
# ...
# at com.example.OrderController.getOrder(OrderController.java:47)
# at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
# at org.springframework.web.servlet.FrameworkServlet.service(...)
# at org.springframework.web.servlet.DispatcherServlet.doDispatch(...)
# at org.springframework.web.filter.OpenEntityManagerInViewFilter.doFilterInternal(...)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# OSIV is in the stack trace.
#
# Every single "leak" will trace back to OpenEntityManagerInViewFilter
# or OpenEntityManagerInViewInterceptor. That is not a coincidence.
# That is the diagnosis.
Set the leak-detection-threshold to a value slightly above your expected maximum database operation time. For most applications, 2 seconds is appropriate — legitimate database operations rarely hold a connection longer than this, but OSIV-affected requests routinely exceed it.
Every stack trace produced by the leak detector will show the full call chain from the HikariCP pool through the OSIV interceptor to your controller method. The OpenEntityManagerInViewFilter or OpenEntityManagerInViewInterceptor will appear in every trace. Once you have seen this pattern in 50 consecutive leak warnings, the diagnosis is conclusive.
The leak detector does not close the connection or terminate the request. It only logs the warning. This makes it safe to enable in production as a diagnostic tool. The performance impact is one ScheduledFuture per connection checkout — negligible.
Finding hidden OSIV queries with Hibernate statistics
HikariCP leak detection tells you that connections are held too long. Hibernate statistics tell you why — specifically, how many queries each request executes and whether lazy loading is inflating that count.
// Hibernate statistics: finding hidden OSIV queries
// Enable in application.yml (dev/staging only — adds overhead):
spring:
jpa:
properties:
hibernate:
generate_statistics: true
# Hibernate logs per-session statistics after each request:
#
# Session Metrics {
# 84723 nanoseconds spent acquiring 1 JDBC connections;
# 0 nanoseconds spent releasing 0 JDBC connections;
# 1847291 nanoseconds spent preparing 26 JDBC statements;
# 4921847 nanoseconds spent executing 26 JDBC statements;
# 0 nanoseconds spent executing 0 JDBC batches;
# 0 nanoseconds spent performing 0 L2C puts;
# 0 nanoseconds spent performing 0 L2C hits;
# 0 nanoseconds spent performing 0 L2C misses;
# 1241893 nanoseconds spent executing 1 flushes;
# 0 nanoseconds spent executing 0 partial-flushes;
# }
#
# The telling number: "26 JDBC statements" for a single request.
# 1 query to load the entity, 25 queries from lazy loading during
# serialization. OSIV made all 25 lazy loads possible — and invisible.
#
# With OSIV disabled, you would have seen LazyInitializationException
# on the first lazy access. Instead, you got 25 silent SELECT statements
# that each hold the connection a little longer.
// Programmatic access to statistics:
@Component
public class HibernateStatsLogger implements HandlerInterceptor {
private static final Logger log = LoggerFactory.getLogger(HibernateStatsLogger.class);
@Autowired private EntityManagerFactory emf;
@Override
public void afterCompletion(HttpServletRequest req, HttpServletResponse res,
Object handler, Exception ex) {
Statistics stats = emf.unwrap(SessionFactory.class).getStatistics();
long queryCount = stats.getQueryExecutionCount();
if (queryCount > 10) {
log.warn("Request {} executed {} queries — review for N+1",
req.getRequestURI(), queryCount);
}
stats.clear(); // Statistics are SessionFactory-global — per-request counts are approximate under concurrency
}
} The "26 JDBC statements" figure in the session metrics is the smoking gun. If your controller makes one explicit repository call but Hibernate executes 26 statements, the other 25 are lazy loads. With OSIV on, they execute silently during serialization. With OSIV off, they throw LazyInitializationException. The latter is better — it tells you at development time that your data access pattern needs attention.
For production systems where you cannot enable full Hibernate statistics (the overhead is measurable at high throughput), the programmatic approach with a HandlerInterceptor is effective. Log a warning when any request executes more than 10 queries. These warnings become your OSIV migration checklist — each one identifies an endpoint that needs explicit fetching.
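The interceptor only fires if it is registered with Spring MVC. A minimal registration sketch, assuming the HibernateStatsLogger bean from the previous block (the path pattern is illustrative):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    private final HibernateStatsLogger statsLogger;

    public WebConfig(HibernateStatsLogger statsLogger) {
        this.statsLogger = statsLogger;
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Count queries only for API endpoints; skip static resources and actuator.
        registry.addInterceptor(statsLogger).addPathPatterns("/api/**");
    }
}
```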
PostgreSQL-side defense: idle_in_transaction_session_timeout
While you work on disabling OSIV in the application, PostgreSQL offers a safety net that limits the blast radius of idle-in-transaction sessions.
-- PostgreSQL-side defense against OSIV: idle_in_transaction_session_timeout
-- This does not fix OSIV. It limits the blast radius.
-- postgresql.conf (or ALTER SYSTEM):
ALTER SYSTEM SET idle_in_transaction_session_timeout = '30s';
SELECT pg_reload_conf();
-- Effect: PostgreSQL will TERMINATE any session that sits in
-- 'idle in transaction' state for longer than 30 seconds.
-- What HikariCP sees:
-- PSQLException: FATAL: terminating connection due to
-- idle-in-transaction timeout
--
-- HikariCP detects the broken connection, evicts it from the pool,
-- and creates a replacement. The thread that was using the connection
-- gets an exception. Your application must handle this gracefully.
-- This is a safety net, not a solution. A well-behaved application
-- should never trigger this timeout. But with OSIV enabled and
-- external APIs having a bad day, 30-second idle-in-transaction
-- durations are entirely possible.
-- Recommended settings for OSIV-affected applications:
-- idle_in_transaction_session_timeout = '30s' -- kill stuck sessions
-- statement_timeout = '30s' -- kill runaway queries
-- idle_session_timeout = '10min' -- kill abandoned sessions (PG 14+)
-- Coordinate with HikariCP:
-- spring.datasource.hikari.max-lifetime should be LESS than
-- idle_session_timeout to ensure HikariCP retires connections
-- before PostgreSQL kills them. Dead connections in the pool
-- cause checkout failures until HikariCP's keepaliveTime or
// connectionTestQuery detects the problem.
I want to be direct about this: idle_in_transaction_session_timeout is a safety net, not a solution. It terminates sessions that have been idle in a transaction for too long, which reclaims the connection slot and allows VACUUM to proceed. But the terminated session causes an exception in your application, and the user's request fails. You are trading a slow failure (30-second pool timeout) for a fast failure (PostgreSQL kills the session after 30 seconds of inactivity).
The fast failure is better. A 30-second pool timeout means your user waits 30 seconds and then sees an error. A PostgreSQL session timeout means the connection is freed for other requests — the overall system degrades less. But neither is acceptable as a long-term posture. The correct answer remains: disable OSIV and stop holding connections during non-database work.
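Before (or instead of) enabling the timeout, you can observe the problem live. This standard pg_stat_activity query lists sessions currently idle in transaction and how long each has been idle:

```sql
-- Sessions holding a transaction open while doing nothing.
-- On an OSIV-affected application under load, expect one row per
-- in-flight HTTP request that is off doing non-database work.
SELECT pid,
       usename,
       application_name,
       now() - state_change AS idle_for,
       query              -- the LAST statement executed, not a running one
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY idle_for DESC;
```

If the idle_for column routinely shows hundreds of milliseconds or more during traffic, the connections are being held across non-database work.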
Monitoring for OSIV symptoms with Spring Boot Actuator
Once you have disabled OSIV — or while you are evaluating whether to disable it — proper monitoring will tell you if the problem is solved or merely deferred.
# Spring Boot Actuator + Micrometer: monitoring for OSIV symptoms
# Add to application.yml:
management:
endpoints:
web:
exposure:
include: health,metrics,hikaricp
metrics:
tags:
application: my-spring-app
# Key metrics to alert on:
# 1. Pool exhaustion warning (alert at > 0 for > 30 seconds)
hikaricp.connections.pending > 0
# 2. Connection usage P99 (alert if > 500ms)
hikaricp.connections.usage{quantile="0.99"} > 500
# 3. Timeout counter (alert on any increment)
hikaricp.connections.timeout > previous_value
# 4. Active connection ratio (alert if > 80% for > 60 seconds)
hikaricp.connections.active / hikaricp.connections.max > 0.8
# Grafana query for connection usage distribution:
# histogram_quantile(0.99,
# rate(hikaricp_connections_usage_seconds_bucket[5m]))
# A healthy Spring Boot + PostgreSQL application looks like:
# pending: 0 (always)
# active/max: 10-30% (brief spikes to 50% are normal)
# usage.p99: 5-50ms
# timeout: 0 (ever-increasing = production incident)
#
# An OSIV-affected application looks like:
# pending: 0-20 (fluctuates with traffic)
# active/max: 80-100% (sustained)
# usage.p99: 500-5000ms
# timeout: climbing
The before-and-after difference in these metrics is the most convincing argument for disabling OSIV. Teams that have been running with OSIV-induced pool pressure for months see the metrics change within minutes of deploying the open-in-view=false configuration. The usage.p99 drops from seconds to milliseconds. The pending metric goes to zero and stays there. The timeout counter stops incrementing. The pager stops going off.
I would recommend setting alerts on these metrics before and after any OSIV migration. The alerts serve two purposes: they validate that the migration worked, and they catch any regressions if someone introduces a new endpoint with a wide @Transactional boundary.
A staged migration plan for large applications
For applications with dozens or hundreds of endpoints, disabling OSIV in one deployment is possible but risky. A staged approach reduces risk while delivering incremental improvements.
// Migrating a large Spring Boot application away from OSIV
// A staged approach for applications with 100+ endpoints.
// Phase 1: Instrument (1 day)
// Enable leak detection, add Actuator metrics, identify worst endpoints.
spring:
datasource:
hikari:
leak-detection-threshold: 2000
jpa:
open-in-view: true # Keep OSIV on — we are observing, not changing
// Phase 2: Fix the worst offenders (1-2 weeks)
// With OSIV still ON, refactor the top 5-10 endpoints by connection hold time:
// - Replace lazy loading with JOIN FETCH or EntityGraph
// - Narrow @Transactional boundaries around external API calls
// - Switch to DTO projections where practical
// These fixes work with or without OSIV. They reduce connection hold time NOW.
// Phase 3: Write integration tests for LazyInitializationException (1 week)
// Before disabling OSIV, write tests that verify every endpoint
// returns correct data. When OSIV is disabled, any endpoint that
// relied on lazy loading in the controller/view layer will fail.
// The test failures ARE the migration checklist.
@SpringBootTest
@AutoConfigureMockMvc
@TestPropertySource(properties = "spring.jpa.open-in-view=false")
class OrderControllerTest {
@Autowired private MockMvc mockMvc;
@Test
void getOrder_returnsItemsWithoutOsiv() throws Exception {
mockMvc.perform(get("/api/orders/1"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.items").isArray())
.andExpect(jsonPath("$.items.length()").value(greaterThan(0)));
// If this fails with LazyInitializationException,
// the service layer needs a JOIN FETCH or EntityGraph.
}
}
// Phase 4: Disable OSIV (1 hour)
spring:
jpa:
open-in-view: false
// Phase 5: Monitor (ongoing)
// Watch HikariCP metrics. Connection usage P99 should drop
// from seconds to milliseconds. Pending should stay at zero.
// If any endpoint throws LazyInitializationException in production,
// the test from Phase 3 missed it — fix and add the test.
Phase 3 is the critical innovation. By writing integration tests with @TestPropertySource(properties = "spring.jpa.open-in-view=false"), you run your entire test suite with OSIV disabled. Every test that fails with LazyInitializationException identifies an endpoint that needs migration. The test failures are the task list. No manual audit of every controller and service class required.
The timeline depends on codebase size. For a 50-endpoint application with good test coverage, the entire migration takes 3-5 days. For a 500-endpoint application with sparse test coverage, it takes 2-4 weeks. Most of that time is spent writing the tests in Phase 3, not fixing the lazy loading in Phase 2. The testing work has value beyond the OSIV migration — you end up with integration tests for every endpoint, which is a worthwhile investment in its own right.
A brief and slightly exasperated history
// A brief and slightly exasperated history of OSIV in Spring Boot:
//
// 2014: Spring Boot 1.0 ships with open-in-view=true by default.
// No warning. No documentation of the connection-holding behavior.
// The "it just works" philosophy applied to lazy loading.
//
// 2016: Vlad Mihalcea (Hibernate developer advocate) publishes
// "The Open Session in View Anti-Pattern" — the definitive
// takedown of OSIV. Thousands of developers read it.
// Spring Boot's default does not change.
//
// 2018: Spring Boot 2.0 adds the WARN log message at startup.
// The warning says "database queries may be performed during
// view rendering." It does NOT mention connection holding,
// pool exhaustion, or idle-in-transaction state.
// The default remains true.
//
// 2023: Spring Boot 3.x. The warning still appears. The default
// is still true. The GitHub issue requesting a default change
// (spring-projects/spring-boot#7107) has 100+ thumbs-up
// and remains open.
//
// 2026: You are reading this article. The default is still true.
// It will probably still be true when your grandchildren
// inherit your Spring Boot applications.
The GitHub issue requesting a default change has been open since 2016. The Spring team's position is that changing the default would break too many existing applications. This is probably true. It is also the reason the warning exists — an acknowledgment that the default is wrong, paired with a decision not to fix it.
Vlad Mihalcea's "The Open Session in View Anti-Pattern" remains the definitive reference on why OSIV is harmful. The Spring Boot documentation itself now includes a brief note acknowledging the performance implications. Neither has changed the default.
The practical outcome is that every new Spring Boot project starts with OSIV enabled. The developer does not know this. The warning scrolls past in the startup log. The application works fine in development, where there are no concurrent requests and external API calls return in 50ms. It works fine in staging, where there are 5 concurrent users. It fails in production, during the first traffic spike that pushes concurrent requests above the pool size.
By the time the pool exhaustion occurs, the application has been in production for months. The OSIV dependency is baked into dozens of endpoints. The fix requires touching every controller and service method that relies on lazy loading. The team increases the pool size instead. The problem returns at the next traffic threshold.
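The arithmetic behind that recurring threshold is Little's law: the request rate a pool can sustain is roughly pool size divided by connection hold time. A back-of-the-envelope sketch (the `PoolMath` class and the 850 ms / 50 ms hold times are illustrative, matching the external-API scenario described earlier):

```java
// Little's law sketch: sustainable request rate ~= pool size / hold time.
// Illustrative numbers only; measure your own hold times in production.
public class PoolMath {

    // Requests per second the pool can sustain before requests start queuing.
    static double sustainableRps(int poolSize, double holdSeconds) {
        return poolSize / holdSeconds;
    }

    public static void main(String[] args) {
        // OSIV on: connection held for the whole request (~850 ms)
        System.out.printf("OSIV on,  pool 10: %.1f req/s%n", sustainableRps(10, 0.850));
        // Doubling the pool only doubles the ceiling; the next spike hits it again.
        System.out.printf("OSIV on,  pool 20: %.1f req/s%n", sustainableRps(20, 0.850));
        // OSIV off: connection held only for the actual queries (~50 ms)
        System.out.printf("OSIV off, pool 10: %.1f req/s%n", sustainableRps(10, 0.050));
    }
}
```

The point of the arithmetic: growing the pool moves the ceiling linearly, while shrinking the hold time moves it by an order of magnitude.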
I find this pattern — a known-harmful default preserved for backward compatibility, with a warning that describes the symptom rather than the cause — to be the infrastructural equivalent of a fire alarm that whispers. One appreciates the intent. One questions the execution.
A note on Spring WebFlux and reactive alternatives
Teams sometimes ask whether migrating to Spring WebFlux solves the OSIV problem. The answer is yes, but not for the reason they hope.
// A note on Spring WebFlux and OSIV
// If you have migrated to Spring WebFlux (reactive), OSIV does not apply.
// WebFlux does not use the Servlet API, so OpenEntityManagerInViewFilter
// and OpenEntityManagerInViewInterceptor are never registered.
// However, WebFlux introduces its own connection-holding patterns:
// Reactive JPA (Hibernate Reactive):
@Transactional
public Mono<OrderResponse> getOrder(Long id) {
    return orderRepository.findById(id)
        .flatMap(order -> {
            // If this calls an external API, the reactive chain
            // does NOT hold a JDBC connection while waiting.
            // Reactive is non-blocking — the thread is released.
            return paymentClient.getStatus(order.getPaymentId())
                .map(status -> new OrderResponse(order, status));
        });
}
// R2DBC (reactive database driver):
// No Hibernate Session. No OSIV. No lazy loading at all.
// Connections are acquired per-query and released immediately.
// This is the reactive model's natural advantage for connection efficiency.
// The catch: Hibernate Reactive is young, limited, and lacks
// many features of traditional Hibernate. R2DBC requires rewriting
// your data layer. Neither is a quick fix for an OSIV problem.
//
// If you are on Spring MVC (the vast majority of Spring Boot apps),
// the fix is spring.jpa.open-in-view=false + explicit fetching.
// Migration to WebFlux/R2DBC is a separate, larger decision
// that should not be motivated solely by connection pool issues.

WebFlux solves the OSIV problem by not having it — the reactive stack does not use servlets, so the servlet-based OSIV interceptor never registers. But WebFlux introduces its own complexity: reactive repositories, different testing patterns, a fundamentally different programming model, and a data access ecosystem (R2DBC, Hibernate Reactive) that is younger and less battle-tested than traditional JPA.
If you are considering WebFlux because of connection pool exhaustion, I would gently suggest that disabling OSIV with spring.jpa.open-in-view=false is a one-line change, while migrating to WebFlux is a rewrite. Solve the connection problem first. Evaluate WebFlux on its own merits — non-blocking I/O, higher concurrency per thread, better resource utilization under high-concurrency workloads — not as a workaround for a configuration default.
Where Gold Lapel helps, and where it honestly does not
Gold Lapel provides connection pooling as a self-optimizing PostgreSQL proxy. It sits between your application and PostgreSQL, multiplexing application connections onto a smaller set of backend connections. This does provide relief for some OSIV symptoms — but intellectual honesty requires acknowledging the limits.
# Gold Lapel's connection pooling absorbs SOME of the OSIV pressure.
# But let us be honest about what a proxy can and cannot fix.
# Without Gold Lapel (direct to PostgreSQL):
# - 10 app connections held idle-in-transaction by OSIV
# - 10 PostgreSQL backend connections occupied doing nothing
# - 11th request waits 30 seconds, then times out
# - PostgreSQL max_connections consumed by idle sessions
# With Gold Lapel:
spring:
  datasource:
    url: jdbc:postgresql://localhost:6432/mydb   # GL proxy port
    hikari:
      maximum-pool-size: 20    # can be larger — connections go to GL, not Postgres
      connection-timeout: 5000
# - 20 app connections to Gold Lapel (lightweight, multiplexed)
# - Gold Lapel maintains 10-20 backend connections to PostgreSQL
# - OSIV still holds app-side connections for full request duration
# - BUT those are connections to GL, not to PostgreSQL directly
# - PostgreSQL sees only GL's managed backend pool
# - max_connections pressure is eliminated
#
# What GL DOES fix:
# - PostgreSQL connection count stays bounded regardless of app behavior
# - Autoscaling events don't create connection storms
# - Backend connections are recycled efficiently
# - Monitoring shows clean pool utilization on the Postgres side
#
# What GL CANNOT fix:
# - Your app threads are still blocked waiting for connections
# - HikariCP pool is still exhausted at the application layer
# - Latency from pool contention is unchanged
# - The fundamental problem — holding connections during HTTP calls — persists
#
# The real fix is disabling OSIV. Gold Lapel buys you breathing room,
# not absolution.

Here is what Gold Lapel changes in an OSIV scenario. Your application opens 20 connections to Gold Lapel instead of 10 to PostgreSQL directly. Gold Lapel maintains its own pool of 10-20 backend connections to PostgreSQL. The critical difference: the 20 connections your application holds idle-in-transaction are connections to the proxy, not to PostgreSQL. PostgreSQL sees only Gold Lapel's managed backend pool. Your max_connections headroom is preserved. VACUUM is not impacted by application-side session duration. Multiple application instances can scale without coordinating pool sizes.
That is real value, especially in autoscaling environments where the connection arithmetic becomes untenable. But the application-layer problem persists. Your HikariCP pool is still exhausted. Your threads are still blocked. Your users are still waiting. Gold Lapel absorbs the PostgreSQL-side damage but cannot fix the fundamental issue of holding connections during non-database work.
The real fix is disabling OSIV. Set spring.jpa.open-in-view=false. Refactor your lazy loading to use explicit fetches. Narrow your @Transactional boundaries. The connection pool problem vanishes — not because the pool is bigger, not because a proxy is absorbing the pressure, but because connections are held for 5 milliseconds instead of 500.
Gold Lapel makes your PostgreSQL connection management better regardless of whether OSIV is enabled. But if you are reading this article because your pool is exhausted, the answer is not more connections, not a bigger pool, and not a proxy. The answer is one line of configuration and the discipline to fetch your data explicitly.
Your pool will thank you. Your database will thank you. Your users — who have been staring at 30-second timeouts during every traffic spike — will thank you most of all.
If you'll permit me a further observation — pool exhaustion often shares a stage with connection overhead. I have prepared a connection pooling benchmark that quantifies the cost of establishing versus reusing connections, so you can see in numbers what you have just felt in latency.