PostgreSQL LISTEN/NOTIFY, SKIP LOCKED, and UNLOGGED Tables as Redis Alternatives
You arrived with a Redis dependency and a PostgreSQL instance that has been quietly capable of handling the work itself. Allow me to make the introductions.
Three patterns, zero additional services
Redis is commonly deployed alongside PostgreSQL for three categories of work: pub/sub messaging, job queues, and ephemeral storage. Each addresses a legitimate need — and Redis handles them well. But PostgreSQL has native features that cover all three, and most teams never evaluate them. This is not a failure of diligence. The tutorials never mention these features. The conference talks skip over them. I am here to fill the gap.
What follows are working implementations of each pattern — SQL and application code you can run today.
The three PostgreSQL features in question:
- LISTEN/NOTIFY — asynchronous pub/sub messaging through the database connection
- SKIP LOCKED — atomic row claiming for concurrent job processing
- UNLOGGED tables — write-optimized tables that skip the write-ahead log
For the strategic overview of when and why to consolidate on PostgreSQL, see PostgreSQL Caching Without Redis. For the full thesis, see You Don't Need Redis. This article is the implementation companion — less philosophy, more working code.
LISTEN/NOTIFY — Pub/Sub in PostgreSQL
How it works
PostgreSQL has long included a built-in pub/sub mechanism, with notification payloads supported since version 9.0, through two commands that most teams have never encountered (see the LISTEN and NOTIFY documentation):
- NOTIFY channel, 'payload' sends a message to all connections currently listening on that channel.
- LISTEN channel subscribes a connection to receive notifications on that channel.
Messages are delivered asynchronously through the PostgreSQL connection itself — no additional protocol, no separate service. When a NOTIFY fires (either directly or inside a trigger), every connection that has issued LISTEN on that channel receives the notification. The infrastructure you already operate. Doing work you were already paying for.
A few characteristics worth noting:
- Payload limit: 8,000 bytes per notification.
- No persistence: Messages are not stored. If no one is listening, the notification is discarded.
- No replay: There is no message history. A subscriber that connects after a notification was sent will never see it.
- Transaction-aware: NOTIFY inside a transaction is delivered only when the transaction commits. If it rolls back, the notification is never sent.
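One practical consequence of the payload limit: a trigger or application that serializes whole rows will eventually hit it. A small guard on the sending side keeps notifications under the cap by degrading to a reference the listener can re-fetch. This is a sketch; make_payload and the id-only fallback are illustrative, not part of any library:

```python
import json

# NOTIFY payloads are capped at roughly 8,000 bytes. Rather than risk an
# error on large rows, send the row when it fits and fall back to an
# id-only reference that the listener re-fetches from the table.
MAX_PAYLOAD_BYTES = 7900  # margin below the 8,000-byte limit

def make_payload(op: str, row: dict) -> str:
    """Build a NOTIFY payload, degrading to an id-only reference if too large."""
    full = json.dumps({"op": op, "row": row})
    if len(full.encode("utf-8")) <= MAX_PAYLOAD_BYTES:
        return full
    # Too big: send just the id; the listener queries the row itself.
    return json.dumps({"op": op, "id": row["id"]})
```

Either shape works with the trigger patterns below; the listener just needs to handle both.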
Implementation: cache invalidation
The most natural use of LISTEN/NOTIFY is cache invalidation — notifying application instances when data changes so they can refresh their local caches. If you have ever written a TTL-based cache and thought "I wish the cache just knew when the data changed," this is that.
CREATE TABLE products (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name TEXT NOT NULL,
price NUMERIC(10, 2) NOT NULL,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

Create a trigger function that fires a notification with the operation type and affected row ID:
CREATE OR REPLACE FUNCTION notify_product_change()
RETURNS TRIGGER AS $$
BEGIN
PERFORM pg_notify(
'product_changes',
json_build_object(
'op', TG_OP,
'id', COALESCE(NEW.id, OLD.id)
)::text
);
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER product_change_trigger
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW
EXECUTE FUNCTION notify_product_change();

Now, any insert, update, or delete on the products table sends a JSON payload like {"op": "UPDATE", "id": 42} to every connection listening on the product_changes channel. The application receives this and knows exactly which product to evict from its local cache. No polling. No TTL guessing. The database tells you when the data changed, because it is the one who changed it.
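On the receiving side, the listener's job is small: decode the payload and evict the matching entry. A minimal sketch, assuming an in-process dict cache (local_cache and handle_change are illustrative names, not from any framework):

```python
import json

# In-process cache keyed by product id; in a real application this might
# be an LRU or a framework cache. Shown as a plain dict for clarity.
local_cache: dict[int, dict] = {}

def handle_change(payload: str) -> None:
    """Evict the affected product from the local cache.

    Payload shape comes from the trigger: {"op": "UPDATE", "id": 42}.
    DELETE and UPDATE both evict; for INSERT nothing is cached yet,
    so the pop is a harmless no-op.
    """
    event = json.loads(payload)
    local_cache.pop(event["id"], None)

# Wiring it to the psycopg3 listener shown later in this article:
#   conn.execute("LISTEN product_changes")
#   for notify in conn.notifies():
#       handle_change(notify.payload)
```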
Implementation: real-time dashboard updates
A second pattern worth your attention: pushing live updates to a dashboard. Consider an orders table where new orders should appear on a monitoring dashboard in real time:
CREATE TABLE orders (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
customer_id BIGINT NOT NULL,
total NUMERIC(12, 2) NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE OR REPLACE FUNCTION notify_new_order()
RETURNS TRIGGER AS $$
BEGIN
PERFORM pg_notify(
'new_orders',
json_build_object(
'id', NEW.id,
'customer_id', NEW.customer_id,
'total', NEW.total,
'status', NEW.status
)::text
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER new_order_trigger
AFTER INSERT ON orders
FOR EACH ROW
EXECUTE FUNCTION notify_new_order();

The application-side listener maintains a long-lived connection to PostgreSQL, waits for notifications, and forwards them to the dashboard via WebSocket, SSE, or whatever transport your frontend uses. The database becomes the event source. No message broker in between.
Listener patterns by language
Each language ecosystem has its own way of consuming PostgreSQL notifications. The critical listener setup for the most common ones:
Python (psycopg3):
import psycopg
conn = psycopg.connect("postgresql://localhost/mydb", autocommit=True)
conn.execute("LISTEN product_changes")
for notify in conn.notifies():
print(f"Channel: {notify.channel}, Payload: {notify.payload}")

Node.js (pg):
const { Client } = require('pg');
const client = new Client({ connectionString: 'postgresql://localhost/mydb' });
await client.connect();
await client.query('LISTEN product_changes');
client.on('notification', (msg) => {
console.log(`Channel: ${msg.channel}, Payload: ${msg.payload}`);
});

Go (pgx):
conn, _ := pgx.Connect(context.Background(), "postgresql://localhost/mydb")
defer conn.Close(context.Background())
_, _ = conn.Exec(context.Background(), "LISTEN product_changes")
for {
notification, _ := conn.WaitForNotification(context.Background())
fmt.Printf("Channel: %s, Payload: %s\n", notification.Channel, notification.Payload)
}

Ruby (pg gem):
conn = PG.connect("postgresql://localhost/mydb")
conn.exec("LISTEN product_changes")
loop do
conn.wait_for_notify do |channel, pid, payload|
puts "Channel: #{channel}, Payload: #{payload}"
end
end

For framework-specific integrations, see Symfony Messenger with PostgreSQL LISTEN/NOTIFY and the LISTEN/NOTIFY chapter in PostgreSQL Cache Invalidation.
Production considerations
I should be forthcoming about what LISTEN/NOTIFY requires in production, because pretending it is entirely carefree would be a disservice.
Connection management. The listening connection must stay open for the duration of the subscription. This means a dedicated connection — not one borrowed from a connection pool. If you are using PgBouncer in transaction mode, the LISTEN state is lost when the connection is returned to the pool. Use a direct, non-pooled connection for listeners.
Missed messages. If the listener disconnects — network blip, application restart, OOM kill — any notifications sent during the gap are lost. There is no catch-up mechanism built into LISTEN/NOTIFY.
Recovery pattern. On reconnect, perform a full cache rebuild or query for changes since the last known state. Track a last_seen_at timestamp and, on reconnect, query for all rows modified after that timestamp.
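The recovery pattern is mostly a filtering step. A sketch of the pure part, assuming the rows come from a catch-up query like SELECT id, updated_at FROM products WHERE updated_at > $1 run on reconnect, before LISTEN resumes (rows_to_refresh is an illustrative helper, not a library call):

```python
from datetime import datetime, timezone

def rows_to_refresh(rows: list[dict], last_seen_at: datetime) -> list[int]:
    """After a reconnect, return ids of rows modified during the gap.

    `rows` stands in for the catch-up query result. The listener evicts
    (or reloads) exactly these entries, records a fresh last_seen_at,
    and only then resumes processing live notifications.
    """
    return [r["id"] for r in rows if r["updated_at"] > last_seen_at]
```

The ordering matters: run the catch-up query first, then resume LISTEN, so nothing that happened during the gap slips through.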
Throughput. The NOTIFY call itself adds minimal overhead to the triggering transaction. The cost is on the listener side: each listening connection processes notifications sequentially. If listeners cannot keep up with the notification rate, pending notifications accumulate in the server's notification queue; pg_notification_queue_usage() reports the fraction of that queue currently in use, and NOTIFY begins to fail once the queue is full.
Monitoring. Use pg_stat_activity to verify that listener connections are alive:
SELECT pid, state, query
FROM pg_stat_activity
WHERE query LIKE '%LISTEN%';

When to use something else
LISTEN/NOTIFY is a pragmatic choice for moderate-throughput pub/sub within a single PostgreSQL cluster. It is not the right tool in every scenario:
- Guaranteed delivery required. If a missed message means data loss or broken business logic, you need a message queue with persistence and acknowledgment — RabbitMQ or Amazon SQS are well-suited for this.
- Message replay or audit trail. If consumers need to reprocess past messages, you need an event log. Kafka excels here, as does PostgreSQL's own logical replication via the WAL.
- Thousands of messages per second, sustained. PostgreSQL handles bursts well, but sustained high-throughput pub/sub is better served by dedicated infrastructure purpose-built for that workload.
- Fan-out to hundreds of subscribers. Each subscriber requires its own database connection. At that scale, a message broker is the more sensible choice.
SKIP LOCKED — Job queues in PostgreSQL
How it works
If you'll permit me a moment of enthusiasm: SELECT ... FOR UPDATE SKIP LOCKED is one of PostgreSQL's most elegant features, and it is the foundation of database-native job queues. It does two things atomically:
- Locks the selected rows (like FOR UPDATE).
- Skips any rows that are already locked by another transaction (instead of waiting for them).
When multiple workers execute the same claim query simultaneously, each one gets a different row — no race conditions, no duplicate processing, no external locking service. The row lock is held for the duration of the transaction: when the transaction commits or rolls back, the lock is released and the row becomes available again. PostgreSQL's own transaction machinery provides the guarantees that Redis-based queues build on top of application logic.
Implementation: a complete job queue
CREATE TABLE jobs (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
queue TEXT NOT NULL DEFAULT 'default',
payload JSONB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
locked_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
attempts INT NOT NULL DEFAULT 0,
max_attempts INT NOT NULL DEFAULT 3
);
CREATE INDEX idx_jobs_claimable
ON jobs (queue, created_at)
WHERE status = 'pending';

I should draw your attention to the partial index on status = 'pending'. It is doing the most important work in this schema. Without it, the claim query scans every row in the table — including completed and failed jobs that are of no interest. With it, the index shrinks as jobs are processed, keeping the claim query fast regardless of table size.
Enqueue a job:
INSERT INTO jobs (queue, payload)
VALUES ('emails', '{"to": "user@example.com", "subject": "Welcome", "body": "..."}');

Claim the next available job:
BEGIN;
SELECT id, queue, payload, attempts
FROM jobs
WHERE queue = 'emails'
AND status = 'pending'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 1;

One query. Atomic claim, atomic lock, no race condition. If another worker is already processing a job, it is skipped. If no jobs are available, the query returns zero rows — no error, no blocking, no contention.
Mark as in-progress (within the same transaction):
UPDATE jobs
SET status = 'processing',
locked_at = now(),
attempts = attempts + 1
WHERE id = $1;

Complete the job:
UPDATE jobs
SET status = 'completed',
completed_at = now()
WHERE id = $1;
COMMIT;

Handle failure:
UPDATE jobs
SET status = CASE
WHEN attempts >= max_attempts THEN 'failed'
ELSE 'pending'
END,
locked_at = NULL
WHERE id = $1;
COMMIT;

If the job has not exhausted its retry limit, it returns to pending and will be claimed again by the next available worker. If retries are exhausted, it moves to failed for manual review or dead-letter handling.
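The retry decision in that CASE expression is worth mirroring in application code, where it can be unit-tested independently of the database. A sketch (next_status is an illustrative name):

```python
def next_status(attempts: int, max_attempts: int) -> str:
    """Mirror of the SQL CASE in the failure handler.

    A job that has exhausted its retries is dead-lettered as 'failed';
    otherwise it returns to the 'pending' pool, where the next worker's
    FOR UPDATE SKIP LOCKED claim will pick it up.
    """
    return "failed" if attempts >= max_attempts else "pending"

# In the worker's failure path (connection handling omitted):
#   status = next_status(job.attempts, job.max_attempts)
#   conn.execute("UPDATE jobs SET status = %s, locked_at = NULL WHERE id = %s",
#                (status, job.id))
```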
Priority queues and delayed jobs
ALTER TABLE jobs ADD COLUMN priority INT NOT NULL DEFAULT 0;
-- Claim with priority (lower number = higher priority):
SELECT id, queue, payload, attempts
FROM jobs
WHERE queue = 'emails'
AND status = 'pending'
ORDER BY priority, created_at
FOR UPDATE SKIP LOCKED
LIMIT 1;
-- Updated index:
DROP INDEX idx_jobs_claimable;
CREATE INDEX idx_jobs_claimable
ON jobs (queue, priority, created_at)
WHERE status = 'pending';

ALTER TABLE jobs ADD COLUMN run_after TIMESTAMPTZ NOT NULL DEFAULT now();
-- Claim only jobs whose run_after has passed:
SELECT id, queue, payload, attempts
FROM jobs
WHERE queue = 'emails'
AND status = 'pending'
AND run_after <= now()
ORDER BY priority, created_at
FOR UPDATE SKIP LOCKED
LIMIT 1;

When the queue is empty, workers need a strategy for waiting. Two approaches:
- Polling: Sleep for a fixed interval (1 second, say) and retry the claim query. Simple and reliable.
- LISTEN/NOTIFY: Have the enqueue operation fire a NOTIFY on a jobs_available channel. Workers LISTEN and wake immediately when a new job arrives.
The second approach combines two of the patterns in this article. I do enjoy it when the tools complement each other.
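On the polling side, a fixed interval works, but an idle fleet touches the database less often if the delay backs off while claims keep coming up empty. This is a variation of my own suggestion, not a prescribed pattern; next_delay and its defaults are illustrative, and the cap bounds how long a job can wait if a wake-up NOTIFY is missed:

```python
def next_delay(consecutive_empty: int, base: float = 0.25, cap: float = 5.0) -> float:
    """Polling fallback for an idle worker.

    Backs off exponentially after consecutive empty claims, capped so
    that even if a wake-up notification is missed (the listener was
    reconnecting, say), a job waits at most `cap` seconds to be claimed.
    Reset `consecutive_empty` to 0 whenever a claim succeeds.
    """
    return min(cap, base * (2 ** consecutive_empty))
```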
Batch processing
For high-throughput queues where per-job transaction overhead is the bottleneck, claim multiple jobs at once:
BEGIN;
SELECT id, queue, payload, attempts
FROM jobs
WHERE queue = 'bulk_imports'
AND status = 'pending'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 50;

The worker processes all 50 jobs within the same transaction. This is appropriate when individual job processing is fast, the cost of per-job transaction overhead is significant, and partial failure of the batch is acceptable.
Cleanup and maintenance
A well-maintained household requires regular attention to its records. Job tables accumulate rows, and without cleanup, the table grows indefinitely — eventually degrading performance even with the partial index.
-- Archive and delete in one atomic operation:
WITH moved AS (
DELETE FROM jobs
WHERE status IN ('completed', 'failed')
AND completed_at < now() - INTERVAL '30 days'
RETURNING *
)
INSERT INTO jobs_archive SELECT * FROM moved;
-- Or automate with pg_cron:
SELECT cron.schedule(
'archive-old-jobs',
'0 3 * * *',
$$
WITH moved AS (
DELETE FROM jobs
WHERE status IN ('completed', 'failed')
AND completed_at < now() - INTERVAL '30 days'
RETURNING *
)
INSERT INTO jobs_archive SELECT * FROM moved;
$$
);

Job tables have high INSERT/DELETE churn — they accumulate dead tuples quickly. The default autovacuum settings may not keep pace:
ALTER TABLE jobs SET (
autovacuum_vacuum_scale_factor = 0.01,
autovacuum_analyze_scale_factor = 0.005
);

For more detail, see Autovacuum Tuning.
Production considerations
Worker crash recovery. If a worker crashes or its connection drops, PostgreSQL releases the row lock automatically. The job becomes claimable by the next worker. This is not a feature you build — it is how PostgreSQL transactions work.
Idempotency. Because a crashed worker may have partially completed work before the transaction rolled back, jobs should be designed to be safe to retry. If the work has side effects that cannot be repeated — charging a credit card, for instance — use an idempotency key stored in the job payload and check it before processing.
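The idempotency-key check reduces to a lookup before the side effect. A sketch using an in-memory set as a stand-in for a durable store; in practice you would back this with a UNIQUE-constrained column so the check survives restarts (process_once is an illustrative name):

```python
# Stand-in for a durable store of already-processed keys. In production
# this would be a table with a UNIQUE constraint on the key, so the
# check holds across worker restarts and concurrent workers.
processed_keys: set[str] = set()

def process_once(idempotency_key: str, action) -> bool:
    """Run `action` only if this key has not been seen before.

    Returns True if the side effect executed this time, False if it
    was skipped as a duplicate (e.g. a retried job that had already
    charged the card before its transaction rolled back).
    """
    if idempotency_key in processed_keys:
        return False
    action()
    processed_keys.add(idempotency_key)
    return True
```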
Throughput. A standard PostgreSQL instance handles hundreds of jobs per second through this pattern. With connection pooling and properly tuned indexes, thousands per second are achievable.
Monitoring:
-- Pending job count by queue
SELECT queue, count(*) FROM jobs WHERE status = 'pending' GROUP BY queue;
-- Average time from creation to completion
SELECT queue, avg(completed_at - created_at) AS avg_duration
FROM jobs WHERE status = 'completed' AND completed_at > now() - INTERVAL '1 hour'
GROUP BY queue;
-- Failed job count
SELECT queue, count(*) FROM jobs WHERE status = 'failed' GROUP BY queue;

When to use something else
- Sustained high throughput (thousands of jobs per second). Dedicated queue systems like Sidekiq, Celery, or Amazon SQS are purpose-built and more efficient at this scale.
- Complex routing, dead-letter queues, backpressure. Framework-level queue systems (Sidekiq Pro, Laravel Horizon, Bull) provide these features out of the box.
- Cross-service job distribution. If jobs are produced by one service and consumed by another, a message broker provides better decoupling than a shared database table.
For framework-specific implementations, see Rails Solid Queue with PostgreSQL and Rails Connection Pool Sizing.
UNLOGGED tables — Ephemeral storage without the WAL
How it works
An UNLOGGED table is created with a single keyword — and that keyword changes the table's entire relationship with durability. UNLOGGED tables skip the write-ahead log (WAL), which produces two consequences:
- Writes are faster. The WAL is PostgreSQL's durability mechanism — every write to a regular table is first written to the WAL, then to the data file. Skipping the WAL eliminates this double-write overhead.
- Data is not crash-safe. If PostgreSQL shuts down uncleanly (power loss, OOM kill, kernel panic), UNLOGGED tables are truncated on recovery. The data is gone.
Additionally, because replication in PostgreSQL is WAL-based, UNLOGGED tables are invisible to read replicas. Everything else works normally. Indexes, constraints, foreign keys, triggers, permissions, query planning — all identical to a regular table. It is still PostgreSQL. It simply stopped writing to its diary.
Implementation: session storage
Session data is a natural fit for UNLOGGED tables. Sessions are ephemeral by nature — they expire, they can be regenerated by asking the user to log in again, and the cost of losing them on a server crash is a brief inconvenience rather than data loss.
CREATE UNLOGGED TABLE sessions (
session_id TEXT PRIMARY KEY,
user_id BIGINT NOT NULL,
data JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
expires_at TIMESTAMPTZ NOT NULL DEFAULT now() + INTERVAL '24 hours'
);
CREATE INDEX idx_sessions_expires ON sessions (expires_at);

Write (upsert):
INSERT INTO sessions (session_id, user_id, data, expires_at)
VALUES ($1, $2, $3, now() + INTERVAL '24 hours')
ON CONFLICT (session_id) DO UPDATE
SET data = EXCLUDED.data,
expires_at = now() + INTERVAL '24 hours';

Read:
SELECT data
FROM sessions
WHERE session_id = $1
AND expires_at > now();

Cleanup expired sessions:
DELETE FROM sessions WHERE expires_at < now();
-- Automate with pg_cron:
SELECT cron.schedule(
'cleanup-expired-sessions',
'*/5 * * * *',
'DELETE FROM sessions WHERE expires_at < now()'
);

Implementation: rate limiting
Rate limiting tracks request counts over sliding time windows. The data is inherently ephemeral — losing it on a crash simply resets all rate limits, which is an entirely acceptable outcome. PostgreSQL's ON CONFLICT clause makes this pattern elegant:
CREATE UNLOGGED TABLE rate_limits (
key TEXT PRIMARY KEY,
count INT NOT NULL DEFAULT 1,
window_start TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Atomic increment with window reset:
INSERT INTO rate_limits (key, count, window_start)
VALUES ($1, 1, now())
ON CONFLICT (key) DO UPDATE
SET count = CASE
WHEN rate_limits.window_start < now() - INTERVAL '1 minute'
THEN 1
ELSE rate_limits.count + 1
END,
window_start = CASE
WHEN rate_limits.window_start < now() - INTERVAL '1 minute'
THEN now()
ELSE rate_limits.window_start
END;
-- Check the current count:
SELECT count
FROM rate_limits
WHERE key = $1
AND window_start > now() - INTERVAL '1 minute';

One statement. Three cases handled atomically: first request inserts a new row, subsequent requests within the window increment the count, and the first request after window expiry resets the count and starts a new window.
Implementation: temporary cache table
For data that is expensive to compute but acceptable to lose, an UNLOGGED cache table is a sensible arrangement:
CREATE UNLOGGED TABLE cache (
key TEXT PRIMARY KEY,
value JSONB NOT NULL,
expires_at TIMESTAMPTZ NOT NULL
);
-- Write-through:
INSERT INTO cache (key, value, expires_at)
VALUES ('user:42:profile', $1, now() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value,
expires_at = now() + INTERVAL '1 hour';
-- Read with fallback:
SELECT value FROM cache WHERE key = $1 AND expires_at > now();
-- TTL enforcement:
DELETE FROM cache WHERE expires_at < now();

This pattern provides a simple key-value cache backed by PostgreSQL, without the WAL overhead. I should be clear: it is not a replacement for Redis in latency-sensitive scenarios — Redis is genuinely faster for pure in-memory key-value access. But for reducing repeated expensive queries within your existing infrastructure, this approach is effective and requires no additional services.
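The read-with-fallback flow is usually wrapped in a small read-through helper. A sketch with an in-process dict standing in for the UNLOGGED cache table, so the control flow is visible (get_or_compute is an illustrative name; in the article's setup the dict operations become the SELECT and upsert shown above):

```python
import time

# key -> (expires_at, value); stand-in for the UNLOGGED cache table.
_cache: dict[str, tuple[float, object]] = {}

def get_or_compute(key: str, compute, ttl: float = 3600.0):
    """Read-through cache: return a fresh hit, otherwise recompute
    the value, write it back with a new TTL, and return it."""
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]               # cache hit, still within TTL
    value = compute()               # expensive query or computation
    _cache[key] = (now + ttl, value)
    return value
```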
Performance characteristics
- Write speed: 2-3x faster than logged tables for INSERT-heavy workloads.
- Read speed: Identical to logged tables. Reads do not touch the WAL.
- When the performance gain matters: High write throughput on data that is inherently ephemeral — session stores, rate limiters, scratch tables for ETL, temporary aggregation results.
- When it does not matter: Read-heavy workloads see zero benefit from UNLOGGED.
Converting between logged and unlogged
-- Make an existing table unlogged
ALTER TABLE sessions SET UNLOGGED;
-- Convert back to a regular (logged) table
ALTER TABLE sessions SET LOGGED;

Both operations require an ACCESS EXCLUSIVE lock and rewrite the entire table. On a large table, this can take significant time and will block all reads and writes due to lock contention. Plan accordingly — this is not something to attend to during peak traffic.
When to use something else
- Data must survive a crash. If losing the data on an unclean shutdown would cause real problems, use a regular logged table.
- Data must be visible on read replicas. UNLOGGED tables are not replicated.
- Shared state across a fleet without a shared database. If multiple application servers need shared ephemeral data without sharing a PostgreSQL instance, Redis or another shared data store is the right choice.
- Sub-millisecond access at extreme throughput. Redis is an in-memory data store designed for microsecond-level operations. If your access pattern requires that level of latency at sustained high throughput, PostgreSQL, even UNLOGGED, is not a substitute.
Combining all three
These three patterns are not mutually exclusive — they complement each other rather well. A realistic production architecture might use all three within the same PostgreSQL instance (for the strategic overview, see PostgreSQL Caching Without Redis):
- SKIP LOCKED for a background email queue. Jobs are enqueued by the web application and processed by a fleet of workers.
- LISTEN/NOTIFY to trigger dashboard updates when jobs complete. The job processing transaction fires a NOTIFY on completion; a dashboard service listens and pushes updates to the frontend.
- UNLOGGED tables for session storage. User sessions are stored in an UNLOGGED table for fast writes, with no concern about crash durability.
A brief note on connection management across these patterns:
| Pattern | Connection Type | Pooler Compatible |
|---|---|---|
| SKIP LOCKED | Normal pooled connection | Yes |
| LISTEN/NOTIFY (listener) | Dedicated, non-pooled connection | No — requires direct connection |
| LISTEN/NOTIFY (sender/NOTIFY) | Normal pooled connection | Yes |
| UNLOGGED tables | Normal pooled connection | Yes |
The key constraint is the listener connection for LISTEN/NOTIFY. It must be a long-lived, dedicated connection — not one managed by PgBouncer or another connection pooler in transaction mode. Everything else works normally with standard connection pooling. One dedicated connection for listening, the rest through the pooler. A manageable arrangement.
Monitoring all three:
-- LISTEN/NOTIFY: active listeners
SELECT pid, state, query, backend_start
FROM pg_stat_activity
WHERE query LIKE '%LISTEN%';
-- SKIP LOCKED: job queue health
SELECT queue, status, count(*)
FROM jobs
GROUP BY queue, status;
-- UNLOGGED tables: table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
WHERE relpersistence = 'u'
ORDER BY pg_total_relation_size(oid) DESC;

The relpersistence = 'u' filter identifies UNLOGGED tables specifically. This is useful for tracking growth of ephemeral tables that do not appear in backup size calculations.
Each of these patterns has clear boundaries. LISTEN/NOTIFY handles event propagation. SKIP LOCKED handles work distribution. UNLOGGED tables handle ephemeral storage. They occupy different roles in your architecture and do not step on each other. A well-organized household, if I may say so.