Lock contention
When multiple transactions compete for the same lock, forcing some to wait. The invisible bottleneck — latency climbs, but your CPU and I/O graphs have nothing to report.
Lock contention occurs when one transaction holds a lock that another transaction needs, forcing the second to wait — two members of staff reaching for the same key at the same time. PostgreSQL uses locks at several granularities — row-level, table-level, and lightweight internal locks — to prevent concurrent transactions from corrupting data. When contention is low, these waits are imperceptible. When it is high, queries that normally complete in milliseconds sit idle for seconds, and the symptom is the sort that tries one's patience: latency increases sharply, but CPU utilization and disk I/O remain perfectly flat. The database is not doing work. It is waiting. And waiting, I should note, is not a diagnosis one finds in the monitoring dashboard.
What lock contention is
PostgreSQL acquires locks implicitly on every data-modifying statement. An UPDATE takes a row-level exclusive lock on each row it modifies. A SELECT takes an AccessShareLock on the table. An ALTER TABLE takes an AccessExclusiveLock that blocks everything, including reads.
These locks are organized by type and compatibility. Two transactions can both hold AccessShareLock on the same table simultaneously — reads do not block reads. But two transactions cannot both hold an exclusive lock on the same row. The second one blocks until the first commits or rolls back.
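The conflict is easiest to see with two concurrent sessions. A sketch, assuming the accounts table used in the examples below:

```sql
-- Session 1
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- exclusive row lock acquired

-- Session 2, run concurrently
UPDATE accounts SET balance = balance + 5 WHERE id = 1;   -- blocks here...
-- ...and stays blocked until session 1 runs COMMIT or ROLLBACK
```

A second UPDATE against a different row (id = 2) would proceed immediately; only the contested row serializes.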
Lock contention is what happens when these waits become significant. It is not a configuration parameter or a threshold. It is a runtime condition: transactions are spending a meaningful fraction of their time waiting for locks instead of doing useful work. If I may put it plainly — your database is queuing at the door.
The lock types that matter most in practice:
- Row-level locks — acquired by UPDATE, DELETE, and SELECT FOR UPDATE. The most common source of contention. Two transactions updating the same row will serialize.
- Table-level locks — acquired implicitly by DML (in weak modes) and explicitly by DDL (in strong modes). An ALTER TABLE blocks all concurrent DML on that table.
- Lightweight locks (LWLocks) — internal locks on shared data structures like the buffer pool, WAL buffers, and relation extension. You cannot control these directly, but you can see them in wait events.
- Advisory locks — application-defined locks with no automatic semantics. Used to serialize access to logical resources without locking actual rows.
Why it matters
Lock contention is among the more inconsiderate performance problems, because it hides. The usual indicators stay quiet. CPU utilization does not spike — the waiting transaction is not consuming compute. Disk I/O does not increase — the waiting transaction is not reading or writing. The database looks idle at the resource level while response times climb. I have seen teams spend days investigating application code, network latency, and connection pooling before anyone thinks to check whether transactions are simply standing in line.
The effects compound quickly:
- Cascading waits — transaction A holds a lock, transactions B through F queue behind it. If A takes 2 seconds to commit, all five waiting transactions are delayed by at least 2 seconds, regardless of how fast their own work is. Aggregate user-facing latency grows with the depth of the queue, and if the waiters hold locks of their own, the delay propagates further down the chain.
- Connection exhaustion — each waiting transaction holds an open database connection. Under heavy contention, the connection pool fills with idle-in-transaction sessions, and new requests cannot get a connection at all.
- Deadlocks — when two transactions wait for each other's locks in a circular dependency, neither can proceed. PostgreSQL detects deadlocks automatically and cancels one transaction, but the canceled transaction must be retried, and the detection itself has a cost (the deadlock_timeout delay, default 1 second).
- Autovacuum interference — autovacuum can be blocked by conflicting locks, causing dead tuples to accumulate. The resulting table bloat makes subsequent queries slower, which makes transactions hold locks longer, which increases contention further.
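The connection-exhaustion failure mode is easy to check for directly: look for sessions that hold a transaction open while doing nothing. A sketch (the 60-second timeout is an illustrative value, not a recommendation):

```sql
-- Find sessions holding a transaction open while idle
SELECT pid,
       now() - state_change AS idle_for,
       left(query, 80) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY state_change;

-- Optionally cap how long such sessions may linger
-- (shown per session; can also be set in postgresql.conf)
SET idle_in_transaction_session_timeout = '60s';
```

With the timeout set, PostgreSQL terminates any session that sits idle inside a transaction longer than the limit, releasing its locks and its connection.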
Diagnosing lock contention
Allow me to introduce you to three views that, together, answer the only questions that matter: who is waiting, what they are waiting for, and who is holding the lock they need.
pg_stat_activity: who is waiting
The wait_event_type and wait_event columns in pg_stat_activity reveal what each backend is doing right now. When wait_event_type is Lock, the session is blocked by another transaction.
-- Find sessions currently waiting on locks
SELECT
pid,
wait_event_type,
wait_event,
state,
now() - query_start AS duration,
left(query, 80) AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
ORDER BY query_start;

pg_locks: what locks exist
The pg_locks view shows every lock held or awaited by every backend. The granted column distinguishes between locks that are held (true) and locks that are being waited for (false). Joining with pg_stat_activity gives you the query behind each lock.
-- View all active locks and what they are waiting for
SELECT
l.pid,
l.locktype,
l.mode,
l.granted,
l.relation::regclass AS table_name,
a.state,
left(a.query, 80) AS query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted
ORDER BY a.query_start;

Finding the blocker
The most useful diagnostic query joins pg_locks against itself to match each blocked session with the session that is blocking it. This tells you the specific transaction you need to have a word with — or, if the situation warrants, terminate.
-- Find blocked queries and what is blocking them
SELECT
blocked.pid AS blocked_pid,
blocked.query AS blocked_query,
now() - blocked.query_start AS blocked_duration,
blocker.pid AS blocker_pid,
blocker.query AS blocker_query,
now() - blocker.query_start AS blocker_duration
FROM pg_stat_activity blocked
JOIN pg_locks blocked_locks ON blocked_locks.pid = blocked.pid AND NOT blocked_locks.granted
JOIN pg_locks blocker_locks ON blocker_locks.pid != blocked.pid
AND blocker_locks.locktype = blocked_locks.locktype
AND blocker_locks.database IS NOT DISTINCT FROM blocked_locks.database
AND blocker_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
AND blocker_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocker_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocker_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
AND blocker_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
AND blocker_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
AND blocker_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocker_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
AND blocker_locks.granted
JOIN pg_stat_activity blocker ON blocker.pid = blocker_locks.pid;

In PostgreSQL 9.6 and later, the pg_blocking_pids() function simplifies this: SELECT pg_blocking_pids(blocked_pid) returns the PIDs holding conflicting locks.
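With pg_blocking_pids() available, the self-join above collapses to a few lines:

```sql
-- For each waiting session, list the sessions blocking it
SELECT a.pid AS blocked_pid,
       pg_blocking_pids(a.pid) AS blocker_pids,
       left(a.query, 80) AS blocked_query
FROM pg_stat_activity a
WHERE cardinality(pg_blocking_pids(a.pid)) > 0;
```

The function returns an array because a session can be blocked by several holders at once, for example when it waits behind a queue of shared-lock holders.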
Common causes
Lock contention is almost always a consequence of application behaviour, not database configuration. The database is rarely at fault. The instructions it receives, however, are another matter entirely.
Long transactions holding locks
The most common cause, and — if you'll forgive the candour — the most inconsiderate. A transaction acquires a lock early (an UPDATE, a SELECT FOR UPDATE) and then does something slow before committing — an external API call, a complex computation, or simply running many queries within a single transaction. Every other transaction that needs that lock waits for the entire duration. It is the equivalent of locking a door and then wandering off to make a phone call while a queue forms in the corridor.
-- WRONG: long transaction holds locks across external calls
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ... call external payment API (500ms-5s) ...
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- RIGHT: minimize lock duration
-- 1. Do external work first, outside a transaction
-- 2. Then acquire locks and commit quickly
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

Hot rows
A single row that many transactions update concurrently — a counter, a status field, a balance. Each update takes an exclusive row lock, so updates serialize. At 100 concurrent requests, throughput for that row is 1/100th of what it could be. One row, serving as the sole bottleneck for an entire application. The fix is to restructure: use a separate counters table, batch updates, or move the contended value out of the hot path entirely.
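One common restructuring is a sharded counter: spread the hot value across N rows, pick a shard at random on each write, and sum the shards on read. A sketch, where the counter_shards table and the shard count of 8 are illustrative choices:

```sql
-- Illustrative schema: one logical counter split across 8 rows
CREATE TABLE counter_shards (
    counter_name text   NOT NULL,
    shard        int    NOT NULL,
    value        bigint NOT NULL DEFAULT 0,
    PRIMARY KEY (counter_name, shard)
);

-- Writers pick a random shard, so concurrent updates rarely hit the same row
UPDATE counter_shards
SET value = value + 1
WHERE counter_name = 'page_views'
  AND shard = floor(random() * 8)::int;

-- Readers sum the shards to recover the logical total
SELECT sum(value) FROM counter_shards
WHERE counter_name = 'page_views';
```

Eight shards turn one serialization point into eight, at the cost of a slightly more expensive read.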
DDL blocking DML
Schema changes like ALTER TABLE acquire an AccessExclusiveLock, which is incompatible with every other lock mode — including the humble AccessShareLock that SELECT acquires. On a busy table, an ALTER TABLE can queue behind dozens of active transactions, and once it starts waiting, all new transactions queue behind it. The result is a total stall on the table until the DDL completes. I have seen a well-intentioned ALTER TABLE ADD COLUMN bring a production service to its knees for minutes. The column itself took milliseconds. The waiting took considerably longer.
-- Set a timeout so the DDL does not wait forever for its lock
SET lock_timeout = '3s';
ALTER TABLE orders ADD COLUMN notes text;
-- If the lock is not acquired within 3 seconds, the ALTER fails
-- instead of blocking all writes indefinitely

Explicit LOCK TABLE
Some applications use LOCK TABLE ... IN EXCLUSIVE MODE to enforce serial access. This is almost always unnecessary — row-level locking handles concurrent access correctly. Explicit table locks should be a last resort, used only when you genuinely need to prevent all concurrent writes to a table for the duration of a transaction.
Reducing lock contention
Shorter transactions
The single most effective change, and the one I would attend to first. Move anything that is not a database operation out of the transaction. Do not hold a transaction open while calling an external service, rendering a template, or processing business logic. Acquire the lock, do the database work, commit. The shorter the window between lock acquisition and commit, the less time other transactions spend waiting. Brevity, in this context, is not merely a virtue — it is the remedy.
Advisory locks
When you need to serialize access to a logical resource — a user account, an order, a processing pipeline — advisory locks let you do so without locking actual rows. They are application-defined: you choose a numeric key, and PostgreSQL provides mutual exclusion on that key. No table rows are locked, and no row-level contention is created. For string identifiers, a common trick is to derive the key from a hash, e.g. pg_advisory_xact_lock(hashtext('order:42')).
-- Use advisory locks to serialize access to a logical resource
-- without locking actual rows
-- Session-level advisory lock (released at end of session)
SELECT pg_advisory_lock(12345);
-- ... do work ...
SELECT pg_advisory_unlock(12345);
-- Transaction-level advisory lock (released at end of transaction)
BEGIN;
SELECT pg_advisory_xact_lock(12345);
-- ... do work ...
COMMIT; -- lock released automatically
-- Non-blocking: try to acquire, skip if already held
SELECT pg_try_advisory_lock(12345); -- returns false if lock not acquired

SKIP LOCKED
For queue-like patterns where multiple workers process rows from the same table, SKIP LOCKED lets each worker take the next available unlocked row instead of waiting for a specific one. No worker blocks another. This is the standard approach for implementing job queues in PostgreSQL, and it is — if you'll permit me a moment of appreciation — an exceptionally elegant piece of design. The locked row is not contested; it is simply stepped past.
-- SKIP LOCKED: process only rows that are not locked
-- Ideal for job queues and task-processing patterns
-- Worker picks up the next available task
UPDATE tasks
SET status = 'processing', worker_id = 'worker-1'
WHERE id = (
SELECT id FROM tasks
WHERE status = 'pending'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 1
)
RETURNING *;
-- Multiple workers can run this concurrently
-- without blocking each other — each gets a different row

Optimistic locking
Instead of locking a row before modifying it, read the row (with a version number or timestamp), do your work, then update with a WHERE clause that checks the version has not changed. If it has, retry. This avoids holding locks during slow operations and works well when collisions are rare. Most ORMs support this pattern natively.
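A minimal sketch of the pattern in plain SQL, assuming an illustrative documents table with a version column:

```sql
-- 1. Read the row and remember its version (no lock held afterwards)
SELECT id, body, version FROM documents WHERE id = 42;
-- suppose this returned version = 7

-- ... slow work happens here, outside any transaction ...

-- 2. Write back only if nobody else changed the row in the meantime
UPDATE documents
SET body = 'revised text',
    version = version + 1
WHERE id = 42
  AND version = 7;   -- the version we read in step 1
-- If this reports "UPDATE 0", another writer got there first:
-- re-read the row and retry
```

The exclusive row lock is held only for the duration of the final UPDATE, not for the slow work in between.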
Reducing unnecessary indexes
Every index on a table must be updated when a row is inserted or updated. Each index update acquires its own internal locks. A table with 10 indexes creates 10 times the internal lock traffic of a table with one. Dropping unused indexes reduces write-path contention and also reduces the window during which row locks are held, since the transaction completes faster.
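Candidates for removal can be found in the pg_stat_user_indexes view: indexes with zero scans since the statistics were last reset. Do confirm an index is not backing a unique or primary-key constraint before dropping it:

```sql
-- Indexes that have never been used for scans, largest first
SELECT schemaname,
       relname      AS table_name,
       indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

A zero count is only meaningful if the server has been running a representative workload since the last stats reset; a rarely-run monthly report may still depend on an apparently unused index.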
How Gold Lapel relates
Gold Lapel sits at the proxy layer and observes every query as it passes through. When query latency increases without a corresponding increase in execution time at the PostgreSQL level, the discrepancy points to wait time — and lock contention is a primary cause. This is precisely the kind of hidden problem I find most worth diagnosing.
Gold Lapel detects these lock-related latency patterns by correlating proxy-observed response times with query execution characteristics. A query that normally completes in 2ms but occasionally takes 3 seconds, with no change in its execution plan or I/O profile, is exhibiting contention. Gold Lapel surfaces these patterns so you can trace them back to the holding transaction and restructure accordingly — before the queue at the door becomes a queue out the door.