You Don't Need Elasticsearch

Chapter 16: The Migration

The Waiter of Gold Lapel · Published Apr 12, 2026 · 11 min

I have, across fifteen chapters, made an argument. Allow me now to make it practical.

Knowing that PostgreSQL can replace Elasticsearch is one thing. Replacing it — on a Tuesday, with production traffic, with a team that has other priorities and a manager who needs assurance that nothing will break — is another thing entirely. The distance between “I’m convinced” and “we’re doing this” is not technical. It is emotional. It is the anxiety of changing something that works, even when you know it could work better.

I understand this anxiety. I have guided transitions before. And I can tell you that the path is known, the footing is sound, and every step is reversible until you choose otherwise. No one is asking you to leap. I am asking you to walk, at your own pace, with the ability to turn back at any point.

This chapter provides the playbook. Seven phases. Each incremental. Each reversible. At no point does a user see an error, an outage, or a degraded result because of the migration. The migration is invisible to your users from start to finish. That is not a goal — it is the design.

Phase 1: Audit

What Elasticsearch features are you actually using?

This is the most important question in the migration, and I can tell you from experience that the answer almost always surprises the team asking it. Most teams use only three or four of Elasticsearch’s capabilities. The rest was configured during initial setup, copied from a tutorial, or added speculatively for a feature that never shipped.

Query your Elasticsearch access logs. Which API endpoints are called? How often? By which services? Which indices are queried regularly, and which are simply there — created years ago, never removed, consuming resources without contributing value?

The audit typically reveals something like: “We thought we were using all of Elasticsearch. We’re using match queries, the completion suggester, and terms aggregations. That’s it.” Three features. Three rows in the mapping table. Three chapters in this book.

Some teams discover they’re using more — percolator, geo queries, complex aggregation pipelines. That is fine. The audit tells you the scope, which determines the timeline. A three-feature migration takes weeks. A ten-feature migration takes longer. Both are manageable when you know what you’re managing.

The audit turns “should we migrate?” from a philosophical debate into a scoped project with defined requirements. I find that a considerable improvement.

Phase 2: Map

For each Elasticsearch feature identified in the audit, map it to the corresponding PostgreSQL capability:

| ES Feature | GL Method | PostgreSQL Mechanism | Book Chapter |
|---|---|---|---|
| match query | search() | tsvector + GIN | Ch. 4 |
| fuzzy query | search_fuzzy() | pg_trgm similarity | Ch. 5 |
| completion suggester | suggest() | ILIKE + trigram GIN | Ch. 9 |
| terms aggregation | facets() | GROUP BY + COUNT | Ch. 10 |
| metric aggregations | aggregate() | AVG/SUM/MIN/MAX | Ch. 10 |
| percolator | percolate() | stored tsquery + GIN | Ch. 11 |
| kNN search | similar() | pgvector + HNSW | Ch. 8 |
| custom analyzer | create_search_config() | TEXT SEARCH CONFIGURATION | Ch. 12 |
| _analyze API | analyze() | ts_debug() | Ch. 12 |
| _explain API | explain_score() | tsvector + ts_rank inspection | Ch. 12 |
| phonetic plugin | search_phonetic() | fuzzystrmatch | Ch. 6 |
| highlight | included in search() | ts_headline() | Ch. 4 |
| combined scoring | hybrid search pattern | RRF with CTEs | Ch. 13 |

This table is the migration’s technical specification. Every row is a feature you are replacing. Every row has a chapter you have already read. The audit from Phase 1 tells you which rows apply to your project. Most migrations involve three to five rows. Some involve more. None involve a capability that this book has not already demonstrated.
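The first row of the table, rendered as plain SQL, has roughly this shape. The `articles` table and its `search_vec` column are illustrative placeholders, not the book’s exact schema:

```sql
-- GIN index over a precomputed tsvector column makes the match fast.
CREATE INDEX IF NOT EXISTS articles_search_idx
    ON articles USING GIN (search_vec);

-- The equivalent of an Elasticsearch match query:
SELECT id, title
FROM articles
WHERE search_vec @@ websearch_to_tsquery('english', 'postgres migration')
ORDER BY ts_rank(search_vec,
                 websearch_to_tsquery('english', 'postgres migration')) DESC
LIMIT 10;
```

`websearch_to_tsquery` accepts user-typed, search-engine-style input; `to_tsquery` is the stricter alternative when you control the query syntax.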

I would suggest including this table in your migration proposal. It answers the question your engineering manager will ask: “How do we know PostgreSQL can do what Elasticsearch does for us?” The answer is thirteen rows, each with a chapter reference.

Phase 3: Build

Build the PostgreSQL search infrastructure:

  1. Design the materialized view(s). Chapter 3’s pattern, informed by Chapter 14’s architecture for your use case. This is the most important decision in the migration — the materialized view is the foundation. Get it right, and the rest follows naturally.
  2. Install Gold Lapel and configure the proxy. Connect to your PostgreSQL database.
  3. Connect the language wrapper. goldlapel.start() in your application.
  4. Verify results. Run the search methods against your data. Compare results against Elasticsearch’s output on the same queries.

The build phase should take days to a week for a typical application. Test on real data, not sample data — import a copy of your production data into a staging environment and run the search methods against it. The staging environment is where surprises should happen, not production.
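Step 1 might look like the following — a minimal sketch of the materialized-view pattern, where the `products` base table and its columns are placeholders for your own schema:

```sql
-- Weighted tsvector: title matches (weight A) outrank body matches (weight B).
CREATE MATERIALIZED VIEW search_index AS
SELECT p.id,
       p.title,
       p.updated_at,
       setweight(to_tsvector('english', coalesce(p.title, '')), 'A') ||
       setweight(to_tsvector('english', coalesce(p.body,  '')), 'B') AS search_vec
FROM products p;

-- A unique index is required for REFRESH ... CONCURRENTLY later;
-- the GIN index serves the search queries.
CREATE UNIQUE INDEX ON search_index (id);
CREATE INDEX ON search_index USING GIN (search_vec);
```

The `coalesce` calls matter: `to_tsvector` of NULL is NULL, and NULL concatenation would silently blank the whole vector for rows with a missing field.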

Phase 4: Dual-Write

Both systems receive writes simultaneously. The application writes to the primary database (which feeds the materialized view) and continues syncing to Elasticsearch. The materialized view refreshes on its schedule. The Elasticsearch sync pipeline continues. Nothing changes for the user. Both systems are current.

This is the dual-write pattern, well-documented by Stripe’s engineering blog: during migration, both the old and new systems receive all writes, allowing you to compare their outputs without risking data loss. The pattern is battle-tested. It is how careful teams migrate anything.

Risks, because I would rather you know them:

Consistency. The materialized view refresh introduces a small lag — seconds to minutes depending on refresh interval. Elasticsearch’s sync pipeline has its own lag. The two systems will be slightly out of sync. Measure the delta. For most applications, it is well within tolerance.
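The lag is measurable rather than hypothetical. A sketch, assuming a `products` base table and a `search_index` view that both carry an `updated_at` column (names illustrative):

```sql
-- Non-blocking refresh; requires a unique index on the materialized view.
REFRESH MATERIALIZED VIEW CONCURRENTLY search_index;

-- The gap between the freshest base row and the freshest view row is
-- the staleness a user could observe at this instant.
SELECT (SELECT max(updated_at) FROM products)
     - (SELECT max(updated_at) FROM search_index) AS view_lag;
```

Graph `view_lag` over a day of traffic before deciding whether your refresh interval is tight enough.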

Increased write load. Writing to both systems adds load to the write path. For most applications, this is negligible. For write-heavy workloads, monitor both systems during dual-write. If the load is concerning, it is temporary — dual-write ends when shadow testing is complete.

Duration. Dual-write should run long enough to build confidence — days to weeks — but not indefinitely. Set a timeline. Evaluate at the end. Extend if needed. But do not let dual-write become a permanent state. Permanent dual-write is not a migration — it is two systems.

Phase 5: Shadow

PostgreSQL handles search queries in shadow mode. The application sends each query to both Elasticsearch and PostgreSQL, returns Elasticsearch’s results to the user, and logs PostgreSQL’s results for comparison. The user sees nothing different. You see everything.

Compare:

  • Latency: P50, P95, P99 for both systems. Is PostgreSQL meeting your requirements?
  • Result overlap: What percentage of PostgreSQL’s top-10 matches Elasticsearch’s top-10? 80%+ overlap is typical for equivalent configurations.
  • Relevance quality: Are the results equivalent? Different but acceptable? Noticeably worse in specific cases?
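If the shadow harness logs both result sets to a table, the overlap metric can be computed in SQL. A sketch, assuming a hypothetical `shadow_results(query_id, engine, rank, doc_id)` table with one row per result position:

```sql
-- Fraction of Elasticsearch's top-10 that also appears in PostgreSQL's
-- top-10, per query.
SELECT es.query_id,
       count(*) FILTER (WHERE pg.doc_id IS NOT NULL)::numeric / 10 AS overlap
FROM shadow_results es
LEFT JOIN shadow_results pg
       ON pg.query_id = es.query_id
      AND pg.engine   = 'postgres'
      AND pg.doc_id   = es.doc_id
      AND pg.rank    <= 10
WHERE es.engine = 'elasticsearch'
  AND es.rank  <= 10
GROUP BY es.query_id;
```

Sort ascending by `overlap` and read the worst queries first — they are where the shadow phase earns its keep.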

If results diverge significantly, debug with analyze() and explain_score() from Chapter 12. The divergence is usually a stemming or language configuration difference — PostgreSQL’s english text search config may stem differently than Elasticsearch’s standard analyzer. Custom text search configurations can align the behavior. The tools for diagnosing this are in Chapter 12. The fix is usually a configuration adjustment, not a code change.
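`ts_debug()` is a stock PostgreSQL function, and it is the fastest way to see where stemming diverges:

```sql
-- Shows how the 'english' configuration tokenizes and stems each word.
SELECT alias, token, lexemes
FROM ts_debug('english', 'The runners were running quickly');
-- 'running' stems to the lexeme {run}; if Elasticsearch's analyzer does
-- not stem (or stems differently), the two result sets will diverge.
```

Run the same text through Elasticsearch’s `_analyze` API and diff the token streams; the mismatched tokens point directly at the configuration to adjust.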

Run shadow mode for as long as you need to build confidence. Days, weeks, a month. There is no rush. I would rather you take an extra week in shadow mode than cut over a day too early. Confidence is not a luxury in a migration. It is the product.

Phase 6: Cutover

Flip the switch. The application now reads from PostgreSQL instead of Elasticsearch.

Elasticsearch is still running. Still receiving writes. Still available for rollback.

Monitor: latency, error rates, search quality. If anything is wrong, flip back.

The cutover should be a feature flag, not a deployment. Toggle it in configuration, not in code. The rollback is flipping the flag back — instant, no deployment required, no downtime, no incident. The difference between a feature flag and a deployment rollback is the difference between flipping a light switch and rewiring the building. I recommend the light switch.

Keep Elasticsearch running during the rollback window. The dual-write continues. If you need to revert, the path back is the same path forward, just in the other direction.

Phase 7: Decommission

Once confident — days to weeks after cutover:

  1. Stop the Elasticsearch sync pipeline
  2. Stop the Elasticsearch cluster
  3. Remove the Elasticsearch client from the codebase
  4. Remove the Elasticsearch infrastructure — servers, containers, monitoring dashboards, alerting rules

I would like to be direct about what this moment feels like, because I have seen it before.

No more JVM heap tuning. No more index mapping changes that require reindexing. No more data sync pipeline to maintain, to monitor, to debug when it falls behind. No more Elasticsearch cluster to run alongside the database you were already running. No more 3 AM alerts for a search service that was, in the end, a second copy of data that already lived in PostgreSQL.

The search is in the database. The database is what you were already operating. One system instead of two. One source of truth instead of two sources that must be kept in agreement. The operational burden does not reduce — it lifts.

Decommission is the only irreversible step. Everything before it can be rolled back. By the time you reach decommission, you have had weeks of production validation. The confidence is not assumed. It is earned, phase by phase, with data at every step.

Addressing Real Fears

The seven phases address the technical path. This section addresses the emotional one — because the fears are real, and I would not be doing my job if I pretended they were not.

“What if relevance is different?”

It will be, slightly. PostgreSQL’s ranking (ts_rank) and Elasticsearch’s ranking (BM25) use different algorithms. The top results are usually the same; the ordering of middle-ranked results may differ. Debug with analyze() and explain_score() from Chapter 12. Tune with custom text search configurations. If you are using hybrid search (Chapter 13), the pgvector semantic signal may actually improve relevance over Elasticsearch’s lexical-only ranking. Some teams find their search gets better after migration, not just equivalent.
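The hybrid pattern mentioned above can be sketched as reciprocal rank fusion over two CTEs. Everything here is illustrative — the `articles` table, the `embedding` column, the `:query_embedding` parameter — but the shape is the standard RRF formula, where each signal contributes 1/(k + rank):

```sql
-- k = 60 is the conventional RRF damping constant.
WITH lexical AS (
    SELECT id,
           row_number() OVER (
               ORDER BY ts_rank(search_vec,
                                websearch_to_tsquery('english', 'wireless headphones')) DESC
           ) AS r
    FROM articles
    WHERE search_vec @@ websearch_to_tsquery('english', 'wireless headphones')
),
semantic AS (
    SELECT id,
           -- <=> is pgvector's cosine-distance operator.
           row_number() OVER (ORDER BY embedding <=> :query_embedding) AS r
    FROM articles
)
SELECT id,
       coalesce(1.0 / (60 + l.r), 0) + coalesce(1.0 / (60 + s.r), 0) AS rrf_score
FROM lexical l
FULL OUTER JOIN semantic s USING (id)
ORDER BY rrf_score DESC
LIMIT 10;
```

The `coalesce` handles documents that appear in only one list; the full outer join is what lets either signal surface a document the other missed.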

“What about zero-downtime?”

The dual-write + shadow + feature-flag-cutover pattern ensures zero user-facing downtime. The migration is incremental and reversible at every step. At no point does a user see an error page, a degraded result, or a moment of unavailability because of the migration.

“What if we need to roll back?”

During the cutover phase, Elasticsearch is still running and receiving writes. Rollback is flipping the feature flag back. Instant. The only irrecoverable step is decommission (Phase 7), and by that point you have had weeks of production validation. The question is not “can we roll back?” — you can, at every step. The question is “will we need to?” — and the shadow phase exists to answer it before cutover.

“How long does it take?”

Audit + Map: days. Build: days to a week. Dual-write + Shadow: weeks. Cutover: a feature flag. Decommission: whenever you are confident. Total: 4–8 weeks for a typical application. More for complex setups with custom analyzers, percolator usage, and extensive aggregation pipelines. Less for applications using only full-text search and autocomplete.

The Migration at a Glance

| Phase | Duration | Reversible? | User Impact |
|---|---|---|---|
| 1. Audit | Days | N/A | None |
| 2. Map | Days | N/A | None |
| 3. Build | Days to a week | N/A | None |
| 4. Dual-write | Weeks | Yes | None |
| 5. Shadow | Weeks | Yes | None |
| 6. Cutover | Instant (flag) | Yes (flip back) | None |
| 7. Decommission | Hours | No | None |

User Impact: None at every phase. I would ask you to notice that column. The migration is invisible to your users from the first day of the audit to the last hour of the decommission. That was the design goal, and it is achievable because every phase runs alongside the existing system, not in place of it.

Honest Boundary

Migration from Elasticsearch is straightforward for common use cases — full-text search, autocomplete, facets, basic fuzzy matching. It is more complex for teams using Elasticsearch’s full query DSL extensively — nested queries, parent-child relationships, geo-distance filters combined with text search. These are achievable in PostgreSQL but require more mapping work in Phase 2 and more testing in Phase 5.
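To make “achievable but more work” concrete: geo-distance combined with text search is expressible with the stock `earthdistance` extension (PostGIS is the heavier-duty alternative). A sketch, with the `places` table and its columns as illustrative placeholders:

```sql
CREATE EXTENSION IF NOT EXISTS cube;
CREATE EXTENSION IF NOT EXISTS earthdistance;

-- Text match plus a radius filter; earth_distance returns metres.
SELECT id, name
FROM places
WHERE search_vec @@ websearch_to_tsquery('english', 'specialty coffee')
  AND earth_distance(ll_to_earth(lat, lon),
                     ll_to_earth(40.7128, -74.0060)) < 2000
ORDER BY earth_distance(ll_to_earth(lat, lon),
                        ll_to_earth(40.7128, -74.0060));
```

The extra Phase 2 work is exactly this kind of translation: each DSL clause becomes a predicate, and each combined query becomes a WHERE clause with more than one condition.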

Teams using Elasticsearch for infrastructure observability — log aggregation, APM tracing, metrics dashboards — should note that this is a different product category. This migration covers application search. The observability stack is a separate decision, and this book does not presume to make it for you.

The dual-write period increases operational complexity temporarily. You are running two search systems simultaneously. This is the cost of a safe migration. It is temporary. And it is considerably less costly than a migration that breaks production.

The migration path is known. Seven phases. Each incremental. Each reversible until decommission. The playbook is not theoretical — audit what you use, map it to PostgreSQL, build it, run both systems in parallel, compare with data, cut over with a flag, decommission when you are confident.

Others have walked this path. I have guided them. The ones who were most anxious at Phase 1 were often the most relieved at Phase 7. The operational simplification — from two systems to one, from eventual consistency to ACID, from a sync pipeline to a materialized view — is not an abstraction. It is felt. Daily. By the team that no longer maintains the second system.

One matter remains in this book. The parity is complete. The migration is practical. The scaling path is clear. But I would not be serving you well if I ended without a candid conversation about what choosing PostgreSQL search asks of you — not in features, where the parity is demonstrated, but in configuration, where the responsibility shifts. Chapter 17 addresses this directly.

A good recommendation includes its costs. I intend to be thorough about both sides.