Gold Lapel vs Redis: A Second Database Shouldn't Be the First Instinct
If you'll permit me a moment of candour — most teams hire Redis to do a job their existing staff could handle, if only someone had bothered to train them.
The arrangement I find in most households
Allow me to describe the architecture I encounter with striking regularity. Your application checks Redis first. Misses. Queries Postgres. Stores the result in Redis. Returns it. Every write requires manually invalidating the relevant Redis keys — and hoping, with the quiet optimism of someone who has never been woken at 3am by a stale cache bug, that you didn't miss one.
You are maintaining two databases, writing invalidation logic by hand, debugging cache coherency issues, and paying for Redis infrastructure. All to avoid hitting a database that is perfectly capable of serving your queries quickly — if it had the right indexes and materialized views.
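To make the burden concrete, here is that pattern reduced to its skeleton, with in-memory stand-ins for Redis and Postgres (every name here is illustrative; the shape of the logic, not the transport, is the point):

```python
import json

cache = {}                            # stands in for Redis
db = {"user:1": {"name": "Ada"}}      # stands in for Postgres

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    hit = cache.get(key)              # 1. check the cache
    if hit is not None:
        return json.loads(hit)
    row = db[key]                     # 2. miss: query the database
    cache[key] = json.dumps(row)      # 3. store the result
    return row

def update_user(user_id: int, fields: dict) -> None:
    key = f"user:{user_id}"
    db[key] = {**db[key], **fields}   # the write itself
    cache.pop(key, None)              # 4. manually invalidate the key, and
                                      # 5. hope no other key referenced this row

get_user(1)                           # cache miss: populates the cache
update_user(1, {"name": "Grace"})
print(get_user(1)["name"])            # "Grace", but only because step 4 ran
```

Four of the five steps are cache plumbing, and the correctness of the whole arrangement rests on a human remembering, at every write site, every key that write can affect.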
I find this genuinely puzzling. The household already has competent staff. The response to slow service has been to hire an entirely separate staff member, station them in the hallway, and have them memorise the answers to questions that the first staff member could answer perfectly well if anyone had taken a moment to organise their filing system.
The question I would gently put to you: is Redis solving a performance problem, or masking one?
What Gold Lapel replaces
GL sits between your application and Postgres as a transparent proxy. Same query, same connection string — change the port from :5432 to :7932. That is the extent of the disruption. GL handles caching automatically. No SET/GET calls, no manual invalidation, no Redis infrastructure to tend to.
Every query result is cached in GL's local memory on first execution. The second time the same query arrives, it is served directly — no round-trip to Postgres. When a write touches a table, GL automatically invalidates every cached result that references it. No application code needed. No cache keys to name, no TTLs to guess at, no invalidation calls to scatter through your codebase like breadcrumbs you hope you haven't dropped.
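The mechanism is easy to picture. Here is a deliberately naive sketch, assuming dependencies are tracked by table name alone; nothing in it reflects GL's actual internals:

```python
import re
from collections import defaultdict

# A toy sketch of proxy-side invalidation. Cache entries are indexed by the
# table names they reference, so any write to a table evicts every dependent
# entry. Purely illustrative; GL's real dependency tracking is not shown.
query_cache = {}                        # SQL text -> cached result
tables_to_queries = defaultdict(set)    # table name -> dependent SQL texts

def tables_in(sql: str) -> set:
    # Naive table extraction: the word after FROM/JOIN/INTO/UPDATE.
    return set(re.findall(r"(?:FROM|JOIN|INTO|UPDATE)\s+(\w+)", sql, re.I))

def execute(sql: str, run_on_postgres):
    if sql.upper().startswith("SELECT"):
        if sql not in query_cache:               # first execution: cache it
            query_cache[sql] = run_on_postgres(sql)
            for table in tables_in(sql):
                tables_to_queries[table].add(sql)
        return query_cache[sql]
    result = run_on_postgres(sql)                # a write: run it, then evict
    for table in tables_in(sql):                 # every cached result that
        for dependent in tables_to_queries.pop(table, set()):
            query_cache.pop(dependent, None)     # referenced the table
    return result

calls = []
def postgres(sql):
    calls.append(sql)                  # stands in for a real round-trip
    return f"rows for: {sql}"

execute("SELECT * FROM users", postgres)   # round-trip, then cached
execute("SELECT * FROM users", postgres)   # served from cache
execute("UPDATE users SET name = 'x'", postgres)
execute("SELECT * FROM users", postgres)   # evicted above, so round-trip again
print(calls)  # three round-trips in total
```

Notice that the application side of this never mentions the cache at all; the proxy sees both the reads and the writes, which is precisely what makes automatic invalidation possible.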
| Capability | Gold Lapel | Redis |
|---|---|---|
| Query caching | Automatic — every query cached on first execution | Manual SET/GET per query |
| Cache invalidation | Automatic — GL sees the write and invalidates instantly | Manual — you write delete calls after every write |
| Network hops | None added — the cache lives in the GL process already handling the query | TCP round-trip (even on localhost) |
| Cache latency | ~0.1–0.3ms (in-process) | ~0.5ms (localhost TCP) |
| Stale data risk | None for traffic routed through GL — it sees every write | High — missed invalidation = stale reads |
| Infrastructure | Zero — runs as a single binary alongside your app | Separate server, monitoring, failover, memory management |
| Code changes | Zero — same SQL, same driver, change the port | Every query needs cache logic (check, miss, store, invalidate) |
| Connection pooling | Built-in (session + transaction mode) | No |
| N+1 detection | Automatic detection + batch prefetch | No |
| Index creation | Automatic B-tree, trigram, expression, partial indexes | No |
| Query rewriting | Automatic matview-based rewriting | No |
| Materialized views | Automatic creation and refresh | No |
The benchmark — if I may be specific
An 8-table analytics query across 1.3 million rows:
- Direct Postgres: 12,000ms
- Redis (assuming you cached it manually): ~0.5ms
- Gold Lapel local cache: ~0.3ms
GL is architecturally faster. The physics are straightforward:
- Redis: app → TCP to Redis process → deserialize → return → TCP back. Even on localhost, that is a full TCP round-trip.
- GL: app → GL process (already in the data path) → hashmap lookup → return. No extra TCP hop — the query is already flowing through GL.
There is no network overhead to eliminate because there is no network involved. The cache lives in the same process that is already handling the query. One does not send a letter to the butler standing in front of you.
But the number that deserves your attention is not the cache latency. It is this: GL also creates materialized views and indexes that make the uncached query faster. That 12,000ms query drops to 3.7ms via a materialized view — before the cache is consulted at all. Redis does not optimise the underlying query. It hides the problem behind a cache, which is rather like treating a headache by dimming the lights. The headache is still there. You have simply made it harder to notice.
What about the other duties Redis performs?
A fair question, and one I am glad you raised. Redis is not merely a cache — it also handles pub/sub, sessions, job queues, rate limiting, and more. But Postgres handles all of these natively. Most teams simply do not know that, in the same way that most households do not realise the groundskeeper can also repair the plumbing.
| Use case | Redis approach | Postgres approach |
|---|---|---|
| Pub/sub | SUBSCRIBE / PUBLISH | LISTEN / NOTIFY (built into Postgres) |
| Session storage | SET with TTL | Sessions table — GL caches reads at sub-ms |
| Job queues | Lists, Streams | SKIP LOCKED + pg_notify (battle-tested at scale) |
| Rate limiting | INCR + EXPIRE | Counter table with window functions |
| Leaderboards | Sorted sets (ZSET) | Materialized view — GL creates and refreshes automatically |
| Full-text search | RedisSearch | pg_trgm + tsvector — GL auto-indexes LIKE/ILIKE patterns |
| Geospatial | GEO commands | PostGIS (the industry standard) |
You do not need a second database for any of these. Postgres already has the capabilities — LISTEN/NOTIFY for pub/sub, SKIP LOCKED for job queues — they have been there for years, quietly waiting to be called upon. It is a remarkable thing, hiring additional staff when the existing staff have skills listed on their CV that no one has bothered to read.
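As one worked example, the counter-table approach to rate limiting fits in a handful of lines. The sketch below uses Python's bundled SQLite as a stand-in for Postgres, and a fixed window rather than window functions for brevity; the table and column names are illustrative:

```python
import sqlite3

# Fixed-window rate limiter backed by a plain counter table. SQLite stands
# in for Postgres here; the schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE rate_limits (
    client TEXT NOT NULL,
    window_start INTEGER NOT NULL,
    hits INTEGER NOT NULL,
    PRIMARY KEY (client, window_start))""")

WINDOW_SECONDS = 60
LIMIT_PER_WINDOW = 3

def allow(client: str, now: float) -> bool:
    window = int(now) // WINDOW_SECONDS
    # One upsert per request: insert the window's counter or bump it.
    conn.execute("""INSERT INTO rate_limits VALUES (?, ?, 1)
        ON CONFLICT (client, window_start)
        DO UPDATE SET hits = hits + 1""", (client, window))
    (hits,) = conn.execute(
        "SELECT hits FROM rate_limits WHERE client = ? AND window_start = ?",
        (client, window)).fetchone()
    return hits <= LIMIT_PER_WINDOW

results = [allow("alice", 1000.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The same upsert translates directly to Postgres (`INSERT ... ON CONFLICT ... DO UPDATE`), and a periodic `DELETE` of expired windows keeps the table small.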
When to keep Redis — and I do mean this
I should be forthcoming, because a butler who overstates his case is no butler at all. Redis is excellent software. Genuinely. It is the right tool when:
- Shared ephemeral state across servers — you need sub-millisecond access to data that does not require database durability, shared across a fleet of application servers with no single source of truth. This is Redis at its finest, and nothing I have said should dissuade you from using it here.
- Redis Streams for event sourcing — a specialised use case where Redis's stream data type has no direct Postgres equivalent. Credit where it is due.
- Mature Redis infrastructure — your team has deep Redis expertise, monitoring is solid, and the operational cost is acceptable. A well-run household does not dismiss competent staff on a whim.
My concern is narrower than it may appear. I am not suggesting Redis has no place in your architecture. I am suggesting that its most common deployment — as a query cache bolted onto Postgres because someone decided the database was slow — is solving the wrong problem. For that particular duty, GL is simpler, faster, and requires no infrastructure at all.
The migration — I shall try not to overcomplicate things
```shell
# Before: App → Redis → Postgres
REDIS_URL=redis://localhost:6379
DATABASE_URL=postgres://user:pass@localhost:5432/mydb

# Your code:
# 1. Check Redis for cached result
# 2. Cache miss → query Postgres
# 3. Store result in Redis
# 4. On write → manually invalidate Redis keys
# 5. Hope you didn't miss one
```

```shell
# After: App → Gold Lapel → Postgres
DATABASE_URL=postgres://user:pass@localhost:7932/mydb

# That's it. Delete your Redis cache logic.
# GL learns your query patterns and caches automatically.
# Writes invalidate the cache — no manual invalidation.
```

Remove your Redis cache logic. Change your connection port. GL learns your query patterns and begins optimising on the first query. No cache warming. No invalidation code to maintain. No ceremony whatsoever.
I appreciate that "just change the port" sounds like the sort of thing that is never actually that simple. In this case, it is. The most involved part of the migration is deleting code — the SET/GET calls, the invalidation hooks, the TTL guesswork. I have always found that removing complexity is the most satisfying form of engineering.
Verdict
If you added Redis to make your Postgres queries faster, Gold Lapel replaces that duty entirely — with less infrastructure, less code, and lower latency. Your application sends the same SQL it always has. GL attends to the rest.
If you use Redis for things beyond query caching — pub/sub, streams, shared ephemeral state — keep it for those. It earns its keep there. But the query caching layer, the manual invalidation, the stale data bugs at 3am? That work belongs to someone who can see the writes as they happen and act accordingly.
Simplicity is not the absence of capability. It is the discipline to use existing capability before adding new complexity.