PostgreSQL Caching Strategies Without Redis
Materialized views, UNLOGGED tables, LISTEN/NOTIFY — your PostgreSQL already has caching capabilities most teams never explore. Allow me to introduce them.
The cache you already have
Good evening. I should like to conduct a brief inventory, if you don't mind.
The standard architecture for most web applications follows a familiar pattern: PostgreSQL for persistence, Redis for "the fast stuff." Session storage, cached query results, leaderboards, background job queues, pub/sub events — the assumption is that PostgreSQL is too slow for these workloads, so a caching layer goes in front of it.
That assumption is worth examining together.
Redis is excellent software — genuinely well-engineered and purpose-built for in-memory data structures with sub-millisecond latency. There are workloads where it is the right tool, and this article will be explicit about what those are. But much of what Redis does in a typical application stack, PostgreSQL can do natively — using capabilities that most teams never explore because the "add Redis" reflex is so deeply ingrained.
This article covers five PostgreSQL-native capabilities that replace common Redis use cases: materialized views, UNLOGGED tables, LISTEN/NOTIFY, SKIP LOCKED, and pg_trgm. Each section includes when the PostgreSQL approach works, when it does not, and honest limitations. The thesis, the same one that runs through You Don't Need Redis, is simple: explore what you have before adding what you don't. Optimize first, scale later.
For a thorough treatment of the PostgreSQL vs Redis comparison, see PostgreSQL vs Redis for Caching.
Materialized views — precomputed query results
Allow me to begin with what I consider the most underappreciated feature in PostgreSQL's repertoire.
A materialized view is a physical snapshot of a query result, stored on disk and queryable like a table (documented in the PostgreSQL materialized views documentation). When you create a materialized view, PostgreSQL executes the underlying query and stores the result set. Subsequent reads query the stored result directly, not the underlying tables.
This is functionally identical to what many applications do with Redis: run an expensive query, store the result in Redis with a TTL, and serve subsequent reads from the cached value. The difference is that materialized views are a first-class database feature with SQL access, index support, and no external infrastructure.
Creating a materialized view
Consider a common dashboard scenario — monthly revenue aggregation across products and regions:
CREATE MATERIALIZED VIEW monthly_revenue AS
SELECT
date_trunc('month', o.created_at) AS month,
p.category,
r.name AS region,
COUNT(*) AS order_count,
SUM(o.total_amount) AS revenue,
AVG(o.total_amount) AS avg_order_value
FROM orders o
JOIN products p ON p.id = o.product_id
JOIN regions r ON r.id = o.region_id
WHERE o.status = 'completed'
GROUP BY 1, 2, 3;
This query might take seconds to run against the raw tables — scanning millions of order rows, joining across tables, computing aggregates. As a materialized view, the result is precomputed. Querying it is as fast as querying any indexed table.
Querying and indexing
Materialized views support indexes, just like regular tables:
CREATE INDEX idx_monthly_revenue_month ON monthly_revenue (month);
CREATE INDEX idx_monthly_revenue_category ON monthly_revenue (category);
CREATE INDEX idx_monthly_revenue_region ON monthly_revenue (region);
With these indexes in place, dashboard queries are single-digit millisecond reads:
SELECT month, revenue, order_count
FROM monthly_revenue
WHERE category = 'electronics'
AND region = 'North America'
ORDER BY month DESC
LIMIT 12;
Refreshing
I should be forthcoming about a detail that matters in practice: materialized views are not automatically updated when underlying data changes. They must be explicitly refreshed:
-- Full refresh: replaces all rows, blocks reads during refresh
REFRESH MATERIALIZED VIEW monthly_revenue;
-- Concurrent refresh: non-blocking, requires a unique index
CREATE UNIQUE INDEX idx_monthly_revenue_unique
ON monthly_revenue (month, category, region);
REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_revenue;
REFRESH MATERIALIZED VIEW CONCURRENTLY is the production-appropriate option in most cases. It builds the new result set in a temporary table, then diffs it against the existing contents and applies the differences, so readers are never blocked. The tradeoff is that it requires a unique index and uses more temporary storage during the refresh.
For a complete guide on materialized view creation, indexing, and pitfalls, see How to Use PostgreSQL Materialized Views and Materialized View Pitfalls.
Automatic refresh strategies
Manual refresh is impractical for production caches. Three automated approaches are worth your consideration:
Scheduled refresh with pg_cron. The simplest approach:
-- Refresh every 15 minutes
SELECT cron.schedule(
'refresh_monthly_revenue',
'*/15 * * * *',
'REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_revenue'
);
Application-triggered refresh. The application explicitly refreshes the view after writes that affect the underlying data. Tighter freshness, but couples refresh logic to write paths.
Write-aware refresh with triggers. A trigger on the underlying table fires after writes and enqueues a refresh job. Tightest freshness guarantees, but can add latency to writes if the view's query is expensive.
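Application-triggered refresh benefits from debouncing, so that a burst of writes produces one refresh rather than many. A sketch of that gating logic (the class name and interval are illustrative; the actual REFRESH statement would be issued through your database driver):

```python
import time

class RefreshGate:
    """Allow at most one materialized-view refresh per min_interval seconds."""
    def __init__(self, min_interval_seconds=60):
        self.min_interval = min_interval_seconds
        self.last_refresh = float("-inf")

    def should_refresh(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_refresh >= self.min_interval:
            self.last_refresh = now
            return True
        return False

# After a write that touches the underlying tables:
gate = RefreshGate(min_interval_seconds=60)
if gate.should_refresh():
    pass  # e.g. cursor.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_revenue")
```

The same debounce could also live in the database, but keeping it in the application avoids holding a connection open for refresh bookkeeping.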
For deeper coverage, see the book chapter on PostgreSQL Materialized Views.
When materialized views replace Redis
- Dashboard aggregations. Revenue summaries, user activity metrics, funnel conversion rates.
- Leaderboards and rankings. Top users, best-selling products, trending items.
- Summary statistics. Count of active users, total orders this month, average response time.
- Cached report data. Any pattern where the application caches an expensive query result as a Redis hash or JSON blob.
When they do not
- Sub-millisecond key-value lookups on individual records. A Redis GET by key takes microseconds. A materialized view query involves PostgreSQL's query planning overhead.
- Session data that changes on every request. Continuous refresh would be required, negating the benefit.
- Data shared across a server fleet with no shared database. Redis as a shared in-memory store serves this use case natively.
UNLOGGED tables — speed without the WAL
UNLOGGED tables are PostgreSQL tables that skip the write-ahead log (WAL). This results in significantly faster writes — typically 2–3x faster for write-heavy workloads.
The tradeoff is explicit: data in UNLOGGED tables is not crash-safe and is not replicated to standby servers. If PostgreSQL crashes, UNLOGGED tables are truncated during recovery.
This tradeoff is exactly the same one Redis makes by default. Redis stores data in memory with optional persistence, and many Redis deployments accept data loss on restart. UNLOGGED tables offer a similar performance profile within PostgreSQL, without a separate system.
Creating an UNLOGGED table
CREATE UNLOGGED TABLE sessions (
session_id TEXT PRIMARY KEY,
user_id BIGINT,
data JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
expires_at TIMESTAMPTZ NOT NULL
);
CREATE INDEX idx_sessions_user_id ON sessions (user_id);
CREATE INDEX idx_sessions_expires_at ON sessions (expires_at); All PostgreSQL index types work on UNLOGGED tables. Constraints, triggers, and functions work identically. The only difference is the absence of WAL writes and replication.
UNLOGGED tables as session storage
Session storage is one of the most common Redis use cases, and it is a natural fit for UNLOGGED tables. Sessions are ephemeral by design — losing them on a server restart means users need to log in again, which is an acceptable tradeoff for most applications.
-- Store a session
INSERT INTO sessions (session_id, user_id, data, expires_at)
VALUES (
'sess_abc123',
42,
'{"cart_items": 3, "last_page": "/checkout"}',
now() + INTERVAL '24 hours'
)
ON CONFLICT (session_id) DO UPDATE SET
data = EXCLUDED.data,
expires_at = EXCLUDED.expires_at;
-- Retrieve a session
SELECT data, user_id
FROM sessions
WHERE session_id = 'sess_abc123'
AND expires_at > now();
-- Delete expired sessions (run periodically via pg_cron)
DELETE FROM sessions WHERE expires_at < now();
Automated expiry cleanup can be scheduled with pg_cron:
SELECT cron.schedule(
'cleanup_expired_sessions',
'*/5 * * * *',
'DELETE FROM sessions WHERE expires_at < now()'
);
Performance characteristics
For session-style workloads, UNLOGGED tables deliver single-digit millisecond latency. This is slower than Redis's microsecond latency, but faster than regular PostgreSQL tables due to the WAL bypass. For the vast majority of web applications, where the HTTP request-response cycle is measured in tens to hundreds of milliseconds, the difference between 0.1ms (Redis) and 2ms (UNLOGGED table) is not perceptible to the user. Your guests will not notice. I assure you.
The significant operational advantage is that sessions live in the same database as user accounts, permissions, and application data. Session lookups can JOIN to user data without cross-system queries.
Honest limitations
- Data does not survive a crash. If PostgreSQL crashes, UNLOGGED table data is gone. For session storage, this means users are logged out.
- Data is not replicated. Read replicas do not contain UNLOGGED table data.
- Not suitable for data that must persist. Anything that cannot be regenerated should use regular (logged) tables.
- Cross-server session sharing requires a shared database. If multiple application servers need to access the same sessions and do not share a PostgreSQL instance, a shared data store is needed.
LISTEN/NOTIFY — pub/sub without a message broker
This one deserves more attention than it receives, and I am glad to be the one to share it: PostgreSQL includes a built-in publish/subscribe messaging system. Any database connection can subscribe to named channels with LISTEN and receive messages sent with NOTIFY. No extensions, no external infrastructure. For working implementations in Python, Node.js, Go, and Ruby, see the Redis alternatives how-to guide.
-- Subscriber (Connection A)
LISTEN product_updates;
-- Publisher (Connection B)
NOTIFY product_updates, '{"product_id": 42, "action": "price_changed"}';
The payload is a text string with a maximum size of 8,000 bytes — more than enough for cache invalidation signals, event identifiers, or compact JSON messages.
LISTEN/NOTIFY is fire-and-forget by design. There is no message persistence, no replay, no acknowledgment. If a subscriber is not connected when a notification is sent, the notification is lost. The mechanism is appropriate for cache invalidation (where a missed message means slightly stale data, not data loss) and inappropriate for workflows requiring guaranteed processing.
Cache invalidation with LISTEN/NOTIFY
The most valuable use: automated cache invalidation. When data changes in PostgreSQL, notify interested application instances so they can invalidate their local caches.
CREATE OR REPLACE FUNCTION notify_product_change()
RETURNS TRIGGER AS $$
BEGIN
PERFORM pg_notify(
'product_updates',
json_build_object(
'action', TG_OP,
'product_id', COALESCE(NEW.id, OLD.id)
)::text
);
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER product_change_trigger
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW EXECUTE FUNCTION notify_product_change(); With this trigger in place, every INSERT, UPDATE, or DELETE on the products table sends a notification. Application servers listening on this channel invalidate the affected cache entry when a notification arrives.
This pattern replaces the common Redis pub/sub invalidation approach. With LISTEN/NOTIFY, the invalidation signal is sent by the database itself as part of the write transaction — no application-level publish step, no second system to route through.
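On the application side, the handler that consumes these notifications is small. A sketch assuming a per-process dict cache keyed by product id (the payload format matches the trigger above; the cache variable is illustrative):

```python
import json

# Per-process cache of product rows, keyed by product id
product_cache = {42: {"name": "Widget", "price": 19.99}}

def handle_product_notification(payload: str, cache: dict) -> None:
    """Evict the affected entry when a product_updates notification arrives."""
    event = json.loads(payload)
    cache.pop(event["product_id"], None)  # a missing key is fine: nothing was cached

# Simulating the payload the trigger would send on an UPDATE of product 42:
handle_product_notification('{"action": "UPDATE", "product_id": 42}', product_cache)
```

Because LISTEN/NOTIFY is fire-and-forget, a handler like this should be paired with a modest TTL on cache entries, so a missed notification means briefly stale data rather than permanently stale data.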
For a framework-specific implementation, see Symfony Messenger with PostgreSQL LISTEN/NOTIFY. The book chapter on Cache Invalidation covers the pattern in greater depth.
When LISTEN/NOTIFY is not enough
- High-throughput pub/sub. Thousands of messages per second across many channels warrants a dedicated message broker (RabbitMQ, Kafka, NATS).
- Message persistence and replay. Subscribers needing to replay missed messages need Kafka or similar durable log-based messaging.
- Fan-out to many subscribers. Each LISTEN subscriber requires an active PostgreSQL connection. Account for this in your connection pool sizing.
SKIP LOCKED — job queues without Redis or Sidekiq
SELECT ... FOR UPDATE SKIP LOCKED enables atomic job claiming in a single SQL statement. Multiple workers can concurrently select the next available job, and SKIP LOCKED ensures no two workers claim the same row. The elegance here is worth appreciating: one SQL statement, and the concurrency problem is solved.
The complete queue pattern
Create the queue table:
CREATE TABLE job_queue (
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
job_type TEXT NOT NULL,
payload JSONB NOT NULL DEFAULT '{}',
status TEXT NOT NULL DEFAULT 'pending',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
attempts INT NOT NULL DEFAULT 0,
max_attempts INT NOT NULL DEFAULT 3
);
CREATE INDEX idx_job_queue_pending
ON job_queue (created_at)
WHERE status = 'pending';
The partial index on status = 'pending' ensures that claiming the next job remains fast even as completed jobs accumulate.
Enqueue a job:
INSERT INTO job_queue (job_type, payload)
VALUES ('send_email', '{"to": "user@example.com", "template": "welcome"}');
Claim the next job (worker):
UPDATE job_queue
SET status = 'processing',
started_at = now(),
attempts = attempts + 1
WHERE id = (
SELECT id FROM job_queue
WHERE status = 'pending'
AND attempts < max_attempts
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED
)
RETURNING *;
This query atomically selects the oldest pending job that no other worker has locked, updates its status to 'processing', and returns it. If multiple workers execute this simultaneously, each gets a different job.
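The invariant that matters here is that concurrent workers never claim the same row. That property can be illustrated outside the database with a small simulation: several threads claim jobs from a shared pending list under a lock, mirroring the atomicity that FOR UPDATE SKIP LOCKED enforces inside PostgreSQL. This demonstrates the guarantee, not the SQL mechanism itself:

```python
import threading

pending = list(range(100))          # job ids waiting to be claimed
claimed = []                        # (worker_id, job_id) pairs
lock = threading.Lock()             # plays the role of the row lock

def worker(worker_id: int) -> None:
    while True:
        with lock:                  # the claim is atomic, like the UPDATE above
            if not pending:
                return
            job_id = pending.pop(0)
            claimed.append((worker_id, job_id))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all workers finish, every job appears exactly once in `claimed`. SKIP LOCKED adds one refinement over this sketch: a worker that finds the head of the queue locked skips past it to the next unlocked row instead of waiting.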
Complete or fail the job:
-- On success
UPDATE job_queue
SET status = 'completed', completed_at = now()
WHERE id = $1;
-- On failure (will be retried if attempts < max_attempts)
UPDATE job_queue
SET status = 'pending', started_at = NULL
WHERE id = $1;
For a framework-specific implementation, see Solid Queue with PostgreSQL SKIP LOCKED.
When SKIP LOCKED replaces Redis-backed queues
- Low-to-medium throughput job queues. Hundreds of jobs per second is well within PostgreSQL's capabilities.
- Jobs derived from database data. Keeping the queue in the same database avoids cross-system references and simplifies transactional guarantees.
- Teams that want fewer moving parts. No new infrastructure to operate.
When it doesn't
- Very high throughput. Sustained thousands of jobs per second pushes against the limits of PostgreSQL's row-level locking.
- Complex routing, priorities, and dead-letter queues. The SKIP LOCKED pattern is a building block, not a full-featured queue framework.
- Workloads that benefit from in-memory queuing. Redis-backed queues hold jobs in memory, eliminating disk I/O for enqueue and dequeue.
pg_trgm — fuzzy search without Elasticsearch
The pg_trgm extension provides trigram-based text similarity matching — the foundation of autocomplete, typo-tolerant search, and "did you mean?" suggestions. Many applications use Elasticsearch for fuzzy search and cache the results in Redis. pg_trgm provides a PostgreSQL-native path that dispenses with both external systems.
Setting up trigram search
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_products_name_trgm
ON products USING GIN (name gin_trgm_ops);
Querying with similarity
-- Find products with names similar to a (possibly misspelled) search term
SELECT name, similarity(name, 'macbok pro') AS sim
FROM products
WHERE name % 'macbok pro'
ORDER BY sim DESC
LIMIT 10; This query finds "MacBook Pro" despite the typo "macbok" — the trigram overlap is high enough to match. The GIN index makes this efficient even on tables with millions of rows.
-- Autocomplete: find products starting with or similar to partial input
SELECT name, name <-> 'iph' AS dist
FROM products
ORDER BY dist
LIMIT 10; Combining with full-text search
For applications needing both fuzzy matching (typo tolerance) and full-text search (relevance ranking, stemming, phrase matching):
-- Combined: full-text relevance + trigram fuzzy matching
SELECT
name,
ts_rank(to_tsvector('english', description), plainto_tsquery('english', 'wireless headphone')) AS text_rank,
similarity(name, 'wireless headphone') AS name_sim
FROM products
WHERE to_tsvector('english', description) @@ plainto_tsquery('english', 'wireless headphone')
OR name % 'wireless headphone'
ORDER BY text_rank * 0.6 + name_sim * 0.4 DESC
LIMIT 20;
When pg_trgm replaces Elasticsearch/Redis search caches
- Autocomplete on known datasets. Product names, usernames, city names — a GIN trigram index delivers autocomplete results in low single-digit milliseconds.
- Typo-tolerant search. Similarity-based fallback finds what the user likely meant.
- Applications that cache Elasticsearch results in Redis. The PostgreSQL-native path eliminates two external systems.
When you still need Elasticsearch
- Complex relevance scoring across heterogeneous document types. Elasticsearch's BM25 scoring, field boosting, and custom analyzers provide depth that pg_trgm and tsvector do not match.
- Aggregations and faceted search on unstructured data at scale.
- Log analytics and observability. Time-series log data with high ingest rates is Elasticsearch's core domain.
Combining the strategies
Consider, if you will, a realistic application architecture:
- Dashboard data: Materialized views precompute aggregations, refreshed every 15 minutes via pg_cron. Previously: expensive queries cached as Redis hashes with TTL.
- Session storage: An UNLOGGED table stores session data with JSONB payloads and TTL-based expiry. Previously: Redis with SET session:id data EX 86400.
- Cache invalidation: LISTEN/NOTIFY triggers fire on product and pricing updates. Previously: Redis pub/sub channels.
- Background jobs: A job queue table with SKIP LOCKED processes order confirmations, email sends, and report generation. Previously: Sidekiq with Redis.
- Search: pg_trgm indexes provide autocomplete and typo-tolerant search. Previously: Elasticsearch results cached in Redis.
What this architecture removes: Redis, a message broker, a dedicated job queue service, and potentially Elasticsearch. Each removed component is a system that no longer needs deployment automation, monitoring, alerting, backup procedures, capacity planning, security patching, and incident response expertise.
What this architecture does not eliminate — and I want to be direct about this: the need for proper PostgreSQL configuration, query optimization, indexing strategy, and connection management. Consolidating workloads onto PostgreSQL works when PostgreSQL is well-tuned for those workloads.
The infrastructure simplification is concrete: one database, one backup strategy, one monitoring stack, one set of credentials to manage. For small-to-medium teams, this reduction in operational surface area can be more valuable than the performance differences between PostgreSQL and purpose-built alternatives.
Where Redis still wins
Redis is excellent software. The PostgreSQL strategies described in this article are alternatives for specific patterns, not a wholesale replacement. Allow me to be direct:
Shared ephemeral state across a fleet of stateless application servers. When dozens of servers need to read and write the same ephemeral data with no shared database, Redis as a centralized in-memory store is purpose-built for this pattern. Keep it for this. It was designed for precisely this.
Sub-millisecond key-value lookups at massive scale. Redis routinely handles millions of operations per second with sub-millisecond p99 latency. For applications where individual lookups must be measured in microseconds, Redis's performance ceiling is significantly higher.
Rate limiting with atomic counters and TTL. Redis's INCR with EXPIRE provides atomic, TTL-based counters that are the standard building block for rate limiting.
Genuine pub/sub at high throughput with many subscribers. For pub/sub with thousands of messages per second fanning out to many subscribers, Redis pub/sub is more efficient than LISTEN/NOTIFY.
For a detailed comparison of PostgreSQL and Redis, see the Redis comparison.
The line to draw is this: if your application uses Redis for caching query results, storing sessions, running background jobs, and sending cache invalidation signals — and your data already lives in PostgreSQL — the PostgreSQL-native alternatives described here can likely replace Redis entirely. If your application uses Redis for shared ephemeral state across a distributed fleet, high-throughput pub/sub, or millions of sub-millisecond key-value operations per second, Redis is the right tool for those workloads.
Simplicity is not the absence of capability. It is the discipline to use existing capability before adding new complexity.