What’s Next

The Butler of Gold Lapel · Published Apr 6, 2026 · 3 min

You Don’t Need Elasticsearch

I have, throughout this book, confined my remarks to the matter of Redis. The caching layer. The job queue. The session store. The pub/sub broker. I have demonstrated, in thirteen chapters and seven frameworks, that PostgreSQL handles these responsibilities without assistance. I trust the case has been made.

But there is another service in your infrastructure that has been watching nervously from across the room, hoping I would not turn my attention to it.

I am turning my attention to it.

Elasticsearch is, for a certain class of problem, a magnificent piece of engineering. Distributed search across terabytes of log data, real-time analytics on millions of events per second, cluster-scale full-text indexing — these are genuine capabilities that PostgreSQL does not replicate. If you operate at that scale, Elasticsearch has earned its place in your household, and I would not presume to dismiss the staff.

Most of you do not operate at that scale.

Most of you added Elasticsearch because your product search was slow, or your autocomplete felt sluggish, or someone mentioned “full-text search” in a planning meeting and the team reached for the tool they had heard of rather than the tool they already had. You are now running a JVM-based distributed search cluster — with its own nodes, its own memory requirements, its own index management, its own version upgrades, and its own 3 AM alerts — to serve queries that PostgreSQL could answer from a single index scan.

Allow me to introduce the staff you did not know you had.

tsvector provides lexical full-text search — tokenization, stemming, stop word removal, boolean operators, phrase matching, and relevance ranking via ts_rank. It has shipped in core since version 8.3, released in 2008, and a GIN index — PostgreSQL’s inverted index — makes it fast.
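A minimal sketch of the idea, against a hypothetical products table (the table and column names are illustrative, not from the book):

```sql
-- Hypothetical products table; a stored generated column keeps the
-- tsvector in sync with the source text automatically.
ALTER TABLE products
  ADD COLUMN search_doc tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', name || ' ' || coalesce(description, ''))
  ) STORED;

-- The inverted index that makes @@ queries fast.
CREATE INDEX idx_products_search ON products USING GIN (search_doc);

-- Stemming, boolean operators, and relevance ranking in one query.
SELECT name, ts_rank(search_doc, query) AS rank
FROM products,
     websearch_to_tsquery('english', 'office chair -leather') AS query
WHERE search_doc @@ query
ORDER BY rank DESC
LIMIT 10;
```

websearch_to_tsquery accepts Google-style syntax (quoted phrases, a leading minus for exclusion), which spares you from building tsquery strings by hand.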

pgvector provides semantic similarity search — vector embeddings, cosine distance, Euclidean distance, and approximate nearest neighbor search via HNSW indexes. When your user searches for “comfortable office chair” and expects to find results for “ergonomic desk seating,” this is the extension that understands why.
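A sketch of the semantic side, assuming the same hypothetical products table and an external embedding model (the 384-dimension figure is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Populated by your embedding pipeline; dimension must match the model.
ALTER TABLE products ADD COLUMN embedding vector(384);

-- Approximate nearest neighbor via HNSW, using cosine distance.
CREATE INDEX idx_products_embedding ON products
  USING hnsw (embedding vector_cosine_ops);

-- <=> is cosine distance: smaller means more similar. $1 is the
-- embedding of the user's query text, computed by the same model.
SELECT name
FROM products
ORDER BY embedding <=> $1
LIMIT 10;
```

This is how “comfortable office chair” finds “ergonomic desk seating”: both phrases embed to nearby points, even though they share no tokens for tsvector to match.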

pg_trgm provides fuzzy matching — trigram similarity for typo tolerance, autocomplete, and “did you mean” suggestions. When your user types “postgre” and expects to find “PostgreSQL,” trigrams handle it. GIN and GiST indexable.
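The trigram piece, sketched against the same hypothetical table:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Index the name column for trigram similarity lookups.
CREATE INDEX idx_products_name_trgm ON products
  USING GIN (name gin_trgm_ops);

-- % is the similarity operator; matches above the threshold
-- (pg_trgm.similarity_threshold, default 0.3) come back.
SELECT name, similarity(name, 'postgre') AS score
FROM products
WHERE name % 'postgre'
ORDER BY score DESC;
```

The same gin_trgm_ops index also accelerates LIKE '%substring%' queries, which makes it a natural fit for autocomplete.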

fuzzystrmatch provides phonetic matching — Levenshtein distance for edit-distance calculations, Soundex and Metaphone for “sounds like” queries. When your user searches for “Shmidt” and you need to find “Schmidt,” this is the extension that does not judge the spelling.
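Both flavors of forgiveness, in two lines:

```sql
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

-- Edit distance: "Shmidt" is one insertion away from "Schmidt".
SELECT levenshtein('Shmidt', 'Schmidt');          -- 1

-- Phonetic matching: both names reduce to the same Soundex code.
SELECT soundex('Shmidt') = soundex('Schmidt');    -- true
```

In practice you would filter with levenshtein(name, $1) <= 2 or compare Soundex codes in a WHERE clause, rather than select the raw values.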

Together:

Elasticsearch ≈ tsvector + pgvector + pg_trgm + fuzzystrmatch

Four PostgreSQL extensions. Zero additional services. Zero additional infrastructure. Zero JVM heap tuning at 3 AM.

And here is the detail that connects this to the book you have just read: the ideal surface for these search indexes is a materialized view. Denormalize your searchable content into a materialized view. Add a tsvector generated column. Add a vector column for embeddings. Index both. The materialized view refreshes on a schedule — the same schedule you configured in Part III — and your search indexes refresh with it. No real-time indexing pipeline. No sync daemon. No eventual consistency between your database and your search cluster. One database. One refresh. One source of truth.
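One wrinkle worth noting: materialized views take their columns from the defining query rather than from generated-column declarations, so the tsvector is computed in the SELECT itself. A sketch, with hypothetical table and column names:

```sql
-- Denormalized search surface; products and product_tags are
-- illustrative names, not from the book.
CREATE MATERIALIZED VIEW product_search AS
SELECT p.id,
       p.name,
       p.embedding,  -- vector column, populated upstream
       to_tsvector('english',
         p.name || ' ' || coalesce(p.description, '') || ' '
         || coalesce(string_agg(t.label, ' '), '')) AS search_doc
FROM products p
LEFT JOIN product_tags t ON t.product_id = p.id
GROUP BY p.id;

CREATE UNIQUE INDEX ON product_search (id);  -- required for CONCURRENTLY
CREATE INDEX ON product_search USING GIN (search_doc);
CREATE INDEX ON product_search USING hnsw (embedding vector_cosine_ops);

-- On the schedule from Part III, without blocking readers:
REFRESH MATERIALIZED VIEW CONCURRENTLY product_search;
```

Lexical, semantic, and fuzzy indexes all live on the one view, and all go stale and fresh together — which is the whole point.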

The next book will cover this in the detail it deserves. The next version of Gold Lapel will automate it.

Until then, the equation is on the wall. The extensions are in your PostgreSQL. The Elasticsearch invoice is, I respectfully suggest, worth reviewing.