PostGIS Query Optimization for Web Developers
You need "find nearby." You do not need a GIS degree. Allow me to attend to the performance details on your behalf.
The one PostGIS query pattern most web developers need
Good evening. I see you have arrived with a spatial query and a suspicion that it should be faster. You are correct, and I am glad you came.
Most web applications that use PostGIS need exactly one spatial operation: "find things near a point." A restaurant finder, a store locator, a ride-sharing app matching drivers to riders, a real estate search within a radius. The query is the same in every case. A user provides coordinates — usually from a browser's Geolocation API or a geocoded address — and the application returns the nearest results, sorted by distance.
PostGIS makes this straightforward, but the difference between a well-optimized "find nearby" query and a naive one is not incremental. On a table with a million rows, the right approach runs in under a millisecond. The wrong approach takes seconds. The queries look almost identical, and that is precisely what makes PostGIS performance a matter requiring careful attention from developers encountering it for the first time.
This guide covers every performance consideration that matters for spatial queries in a web application. You are a backend developer who needs "find nearby" to be fast — not a GIS analyst performing watershed delineation. I have prepared accordingly.
Setting up: a table with spatial data
Before we discuss optimization, permit me to ensure the foundation is correct. You need a table with a spatial column and a GiST index. This is the minimal proper setup for any web application that stores locations:
```sql
CREATE TABLE locations (
    id serial PRIMARY KEY,
    name text NOT NULL,
    category text NOT NULL,
    location geometry(Point, 4326)
);

-- The GiST index is non-negotiable
CREATE INDEX idx_locations_gist ON locations USING GiST (location);
```

Three details to note. The column type is geometry(Point, 4326) — this stores points using SRID 4326 (WGS 84), the coordinate system used by GPS and virtually every geocoding API. The GiST index is created immediately — I cannot stress this enough, and I shall explain why shortly. And the column stores geometry, not geography. The reason for that choice is covered in the next section.
```sql
-- Longitude first, latitude second — PostGIS follows the ISO standard
INSERT INTO locations (name, category, location)
VALUES
    ('Central Perk', 'cafe', ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)),
    ('Blue Bottle', 'cafe', ST_SetSRID(ST_MakePoint(-73.9862, 40.7527), 4326)),
    ('Russ & Daughters', 'restaurant', ST_SetSRID(ST_MakePoint(-73.9880, 40.7222), 4326));
```

A common mistake, and one that I find causes a great deal of unnecessary distress: PostGIS follows the ISO standard in which ST_MakePoint takes longitude first, latitude second. Google Maps and most user-facing tools display latitude first. Reversing them puts your New York cafe in the interior of East Antarctica. I would suggest verifying your first few inserts on a map before loading a million rows.
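If you would like to confirm the ordering after an insert, ST_X and ST_Y report the stored coordinates directly — a small check I would recommend before any bulk load:

```sql
-- ST_X returns longitude, ST_Y returns latitude.
-- For Manhattan, lon should be near -74 and lat near 40.7 —
-- if these columns look swapped, the inserts were reversed.
SELECT name, ST_X(location) AS lon, ST_Y(location) AS lat
FROM locations;
```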
Geometry vs geography: a distinction that deserves your full attention
PostGIS offers two spatial data types, and the choice affects both correctness and speed. I have seen this particular confusion produce some truly unfortunate bugs, so if you will indulge me, I shall be thorough.
Geometry performs calculations on a flat Cartesian plane. It is fast because the math is simple — Euclidean distance, planar intersection, bounding box overlap. But for SRID 4326 (latitude/longitude), distances are returned in degrees, not meters. A "distance" of 0.004 is, I regret to say, meaningless to your API consumer.
Geography performs calculations on a spheroid — the actual shape of the Earth. Distances are always in meters. The math is more expensive because it involves trigonometry on a curved surface, and fewer PostGIS functions support the geography type directly.
```sql
-- Geometry: coordinates are on a flat plane
-- Distances are in the units of the coordinate system (degrees for SRID 4326)
CREATE TABLE places_geom (
    id serial PRIMARY KEY,
    location geometry(Point, 4326)
);

-- Geography: coordinates are on the Earth's spheroid
-- Distances are always in meters
CREATE TABLE places_geog (
    id serial PRIMARY KEY,
    location geography(Point, 4326)
);
```

The performance difference is real. Geography calculations are measurably slower than geometry calculations — the spheroidal math adds overhead per row. For "find nearby" queries, this overhead is applied to every candidate that survives the index filter. On a table with a million densely packed points, the difference between geometry and geography operations can be significant.
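To make the unit difference concrete, here is the same pair of points measured both ways (the figures in the comments are approximate):

```sql
-- Distance between two Manhattan cafes, measured both ways
SELECT
    ST_Distance(
        ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326),
        ST_SetSRID(ST_MakePoint(-73.9862, 40.7527), 4326)
    ) AS distance_degrees,  -- ~0.0043: planar degrees, useless to an API consumer
    ST_Distance(
        ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
        ST_SetSRID(ST_MakePoint(-73.9862, 40.7527), 4326)::geography
    ) AS distance_meters;   -- ~478: meters on the spheroid
```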
The recommended pattern: store geometry, cast to geography
If I may offer a firm recommendation: store data as geometry(Point, 4326) and cast to geography in queries where you need meter-based distances:
```sql
-- The recommended pattern: store as geometry, cast to geography for distance
-- You get the full PostGIS function library plus accurate metric distances

-- Store as geometry (more functions available, faster for non-distance ops)
CREATE TABLE venues (
    id serial PRIMARY KEY,
    name text NOT NULL,
    location geometry(Point, 4326)
);

CREATE INDEX idx_venues_location ON venues USING GiST (location);

-- The planner matches index expressions exactly, so give the geography
-- cast its own index for the ST_DWithin filter below
CREATE INDEX idx_venues_geog ON venues USING GiST ((location::geography));

-- Query with geography cast for accurate meter-based distance
SELECT name, round(ST_Distance(
        location::geography,
        ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
    )::numeric, 0) AS distance_m
FROM venues
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
    1000
)
ORDER BY distance_m;
```

This gives you the full PostGIS function library (many functions only work with geometry), faster operations for non-distance queries, and accurate meter-based distances when you need them. The ::geography cast is applied only to the rows that survive the index filter, so the overhead is minimal.
The GiST index: there is no alternative
I shall be direct. A GiST (Generalized Search Tree) index on a spatial column is not a performance recommendation. It is a requirement. A spatial column without a GiST index is a sequential scan waiting to happen on every query, and I find this personally concerning. Without it, every spatial query scans every row. With it, PostGIS eliminates the vast majority of rows before performing expensive geometric calculations.
```sql
-- Without a GiST index: every query scans every row
-- With a GiST index: bounding box elimination, then exact check

-- Step 1: Create the index
CREATE INDEX idx_locations_gist ON locations USING GiST (location);

-- Step 2: Run ANALYZE so the planner knows the index exists
ANALYZE locations;

-- The index stores bounding boxes, not exact geometries.
-- For a query like ST_DWithin(location::geography, point, 800):
--   1. The index eliminates rows whose bounding box does not overlap
--      the search bounding box (fast, O(log n))
--   2. PostGIS then checks exact geometry against surviving candidates
--   3. On a 1M row table, step 1 might reduce candidates to 200;
--      step 2 checks 200 rows instead of 1,000,000
```

The GiST index stores a bounding box for each geometry. When a query asks "find all points within 800 meters of this location," the index first identifies which bounding boxes overlap the search area. This step is fast — logarithmic time, not linear. PostGIS then performs exact distance calculations only on the surviving candidates. On a table with a million rows, the index might reduce candidates from 1,000,000 to 200. Those 200 exact calculations are negligible. The million exact calculations you avoided are not.
Functions that use the GiST index include ST_DWithin, ST_Contains, ST_Intersects, ST_Within, and ST_Covers. These are the spatial relationship functions — they answer yes/no questions about two geometries, and the index accelerates the "no" answers.
Functions like ST_Distance, ST_Area, and ST_Length do not use the index. They compute a value for every row they receive. If you put ST_Distance in a WHERE clause, PostgreSQL must evaluate it against every row in the table regardless of whether a GiST index exists. This brings us to what I consider the most consequential optimization in this entire guide.
ST_DWithin vs ST_Distance: the optimization I would be remiss not to emphasize
This is the single most impactful optimization for PostGIS queries in web applications, and it is the mistake most commonly made by developers who are new to spatial databases. I encounter it with such regularity that I have developed a certain... sensitivity to the pattern.
The correct approach — use ST_DWithin in the WHERE clause for filtering, and ST_Distance in the SELECT clause for displaying the actual distance:
```sql
-- Find locations within 800 meters of a point
-- ST_DWithin uses the GiST index — this is the correct pattern
SELECT name, category,
       round(ST_Distance(
           location::geography,
           ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
       )::numeric, 0) AS distance_m
FROM locations
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
    800  -- meters
)
ORDER BY distance_m;
```

The slow pattern — and I use "slow" with restraint — using ST_Distance for both filtering and display:
```sql
-- The slow version — DO NOT use this for filtering
-- ST_Distance computes the distance for every row, then filters
SELECT name, category,
       ST_Distance(
           location::geography,
           ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
       ) AS distance_m
FROM locations
WHERE ST_Distance(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
) < 800;  -- cannot use the spatial index
```

These two queries return identical results. The performance difference is the gap between "uses the index" and "does not use the index." On a table with a million points, that gap is typically three to four orders of magnitude. Same results, roughly ten thousand times the cost. I trust you see why this warrants attention.
The reason: ST_DWithin is an index-aware function. It first checks whether the bounding boxes overlap (using the GiST index), then performs the exact distance check only on candidates. ST_Distance is a measurement function — it returns a number, and PostgreSQL has no way to use an index to accelerate "is this number less than 800?" It must compute the distance for every row, then filter.
The EXPLAIN ANALYZE proof
Run both queries through EXPLAIN ANALYZE to see the difference:
```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT name FROM locations
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography,
    800
);
```

With the GiST index in place:

```
Bitmap Heap Scan on locations  (cost=4.42..39.12 rows=5 width=32)
    (actual time=0.08..0.14 rows=6 loops=1)
  Recheck Cond: (location::geography && _st_expand(geography, 800))
  Filter: st_dwithin(location::geography, geography, 800)
  Heap Blocks: exact=3
  Buffers: shared hit=6
  ->  Bitmap Index Scan on idx_locations_gist
        (cost=0.00..4.42 rows=15 width=0)
        (actual time=0.05..0.05 rows=15 loops=1)
        Index Cond: (location::geography && _st_expand(geography, 800))
        Buffers: shared hit=3
Planning Time: 0.21 ms
Execution Time: 0.19 ms
```

The plan shows a Bitmap Index Scan on idx_locations_gist — the GiST index identified candidates, and the filter refined them. Total: 0.19ms, 6 buffer hits.
```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT name FROM locations
WHERE ST_Distance(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
) < 800;
```

No index — a sequential scan over every row:

```
Seq Scan on locations  (cost=0.00..48291.00 rows=333333 width=32)
    (actual time=0.04..1842.91 rows=6 loops=1)
  Filter: (st_distance(location::geography, geography) < 800)
  Rows Removed by Filter: 999994
  Buffers: shared hit=8334 read=23012
Planning Time: 0.12 ms
Execution Time: 1843.18 ms
```

No index. Sequential scan. 999,994 rows removed by filter. 23,012 pages read from disk. 1,843ms. The same six results, nearly ten thousand times slower. One does not need a GIS degree to find this objectionable.
What SRID should I use?
For web applications, the answer is almost always SRID 4326 (WGS 84), and I see no reason to equivocate. This is the coordinate system used by GPS, Google Maps, Mapbox, all geocoding APIs, and the HTML5 Geolocation API. Your input data arrives in 4326. Your output data is consumed by mapping libraries that expect 4326. Storing in any other SRID adds a transformation step that provides no benefit for typical "find nearby" use cases.
A few situations where you might use a different SRID:
- Area calculations in square meters — SRID 4326 measures area in square degrees, which is meaningless. Transform to a local projected CRS (like a UTM zone) or use the geography type for accurate metric area.
- High-precision distance within a small region — a local projected CRS like UTM gives sub-meter accuracy without the overhead of spheroidal math. Only worth the complexity if your application is region-specific and precision matters.
- Importing data that uses a different CRS — government datasets often use national grids (British National Grid, NAD83 State Plane, etc.). Transform to 4326 on import with ST_Transform rather than storing the original CRS.
Do not store data in SRID 3857 (Web Mercator). Web Mercator is a projection designed for tile rendering, not for distance or area calculations. The distortion increases with latitude — a 1km radius search near the equator and near the poles would need different radius values. Mapping libraries handle the 4326-to-3857 projection on the client side.
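If you would care to see the distortion for yourself, the following compares the spheroidal distance with the same measurement taken in Web Mercator at roughly 60°N (the figures in the comments are approximate):

```sql
-- Two points in Helsinki (~60°N), about 1 km apart on the ground
SELECT
    ST_Distance(
        ST_SetSRID(ST_MakePoint(24.9384, 60.1699), 4326)::geography,
        ST_SetSRID(ST_MakePoint(24.9564, 60.1699), 4326)::geography
    ) AS meters_true,      -- ~996: distance on the spheroid
    ST_Distance(
        ST_Transform(ST_SetSRID(ST_MakePoint(24.9384, 60.1699), 4326), 3857),
        ST_Transform(ST_SetSRID(ST_MakePoint(24.9564, 60.1699), 4326), 3857)
    ) AS meters_mercator;  -- ~2004: inflated by roughly 1/cos(60°)
```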
SRID mismatch: the sort of problem that makes one question everything
If you pass a point with the wrong SRID to a spatial function, PostGIS raises an error when the two SRIDs differ outright — but when coordinates from one system are merely labeled with the SRID of another, the query runs and silently returns incorrect results. I have a particular sympathy for developers who encounter the latter, because the symptoms are maddening — your query runs, returns results, and the results are quietly wrong:
```sql
-- SRID mismatch: a silent performance and correctness problem
-- Your table uses SRID 4326
SELECT * FROM locations
WHERE ST_DWithin(
    location,
    ST_SetSRID(ST_MakePoint(-73.98, 40.74), 3857),  -- wrong SRID
    800
);
-- This either errors or silently returns wrong results

-- Always match SRIDs:
SELECT * FROM locations
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography,
    800
);
```

Always ensure the SRID of your query point matches the SRID of your stored column. Use ST_SetSRID when constructing points, and verify with ST_SRID(location) if you suspect a mismatch.
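Two quick audit queries are worth keeping on hand whenever results look suspicious — one asks what the column's type declares, the other what the stored rows actually carry:

```sql
-- What SRID does the column's type modifier declare?
SELECT Find_SRID('public', 'locations', 'location');  -- expect 4326

-- And what SRID do the stored rows actually carry?
SELECT DISTINCT ST_SRID(location) FROM locations;     -- expect a single row: 4326
```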
Common PostGIS performance mistakes — and how to set them right
I have had occasion to review a considerable number of PostGIS configurations in web applications, and the same mistakes appear with a regularity that borders on tradition. Every one of them is easy to fix once identified. Allow me to present them plainly.
| Mistake | Symptom | Fix |
|---|---|---|
| Missing GiST index | Seq Scan on every spatial query | CREATE INDEX ... USING GiST (location) |
| ST_Distance in WHERE clause | Full table scan, distance computed for every row | Use ST_DWithin for filtering, ST_Distance for display |
| SRID mismatch between column and query | Wrong results or errors | Match SRIDs — use ST_SetSRID or ST_Transform |
| Using geometry distances as meters | Distances returned in degrees (e.g., 0.004) | Cast to geography or use ST_Transform to a metric CRS |
| No ANALYZE after bulk data load | Planner ignores the GiST index | Run ANALYZE on the table after loading data |
| ST_Transform in the WHERE clause | Transform computed per row, index bypassed | Store data in the target SRID, or transform the search point instead |
| Computing distances on every API request | High latency under load | Pre-compute with a materialized view, refresh on schedule |
Detecting missing GiST indexes
The most common mistake — and I say this without hesitation — is a missing GiST index on a spatial column. You can query the system catalogs to verify the state of your household:
```sql
-- A common scenario: the table exists, data is loaded, queries work.
-- But there is no GiST index. Everything is a sequential scan.

-- Check if your spatial columns have indexes:
SELECT
    t.relname AS table_name,
    a.attname AS column_name,
    i.relname AS index_name
FROM pg_class t
JOIN pg_attribute a ON a.attrelid = t.oid
JOIN pg_type ty ON ty.oid = a.atttypid
LEFT JOIN pg_index idx ON idx.indrelid = t.oid
    AND a.attnum = ANY(idx.indkey)
LEFT JOIN pg_class i ON i.oid = idx.indexrelid
    AND i.relam = (SELECT oid FROM pg_am WHERE amname = 'gist')
WHERE ty.typname IN ('geometry', 'geography')
  AND t.relkind = 'r'
ORDER BY t.relname, a.attname;

-- If index_name is NULL, you are missing a GiST index.
```

If index_name is NULL for any spatial column that participates in WHERE clauses, create a GiST index on it immediately. I cannot overstate this. This single change is responsible for the largest performance improvements in most PostGIS optimization work.
Pre-computing spatial results: why repeat work the household has already done?
Materialized views are the most effective strategy for spatial queries that are expensive to compute and can tolerate slightly stale data. Instead of calculating distances on every request, compute them once and serve the results from an indexed view. It is, if you will, the difference between preparing a guest's room in advance and making the bed while they stand in the doorway.
Two patterns are particularly useful for web applications:
Pattern 1: Pre-computed distance matrix
When you need distances between two sets of entities — stores and warehouses, drivers and pickup points, offices and airports — pre-compute the pairs within a useful radius:
```sql
-- Pre-compute distances between stores and warehouses
-- instead of calculating on every request
CREATE MATERIALIZED VIEW store_warehouse_distances AS
SELECT
    s.id AS store_id,
    w.id AS warehouse_id,
    round(ST_Distance(
        s.location::geography,
        w.location::geography
    )::numeric, 0) AS distance_m
FROM stores s
CROSS JOIN warehouses w
WHERE ST_DWithin(
    s.location::geography,
    w.location::geography,
    50000  -- only pairs within 50km
);

-- Index the materialized view for fast lookups.
-- The unique index on the pair also makes CONCURRENTLY refresh possible.
CREATE UNIQUE INDEX idx_swd_pair
    ON store_warehouse_distances (store_id, warehouse_id);
CREATE INDEX idx_swd_warehouse ON store_warehouse_distances (warehouse_id);

-- Refresh on a schedule (not on every request)
REFRESH MATERIALIZED VIEW CONCURRENTLY store_warehouse_distances;
```

A CROSS JOIN between 500 stores and 50 warehouses is 25,000 distance calculations. With the ST_DWithin filter limiting to pairs within 50km, the actual computation is smaller. Either way, this runs once during the refresh, not on every API request.
Pattern 2: Pre-computed nearby results
For "find nearby" features where the reference points are your own entities (nearby restaurants, related venues, stores in the area), pre-compute the entire result set:
```sql
-- Pre-compute "nearby" relationships for a storefront
CREATE MATERIALIZED VIEW nearby_venues AS
SELECT
    v1.id AS venue_id,
    v2.id AS nearby_id,
    v2.name AS nearby_name,
    v2.category AS nearby_category,
    round(ST_Distance(
        v1.location::geography,
        v2.location::geography
    )::numeric, 0) AS distance_m
FROM venues v1
JOIN venues v2 ON v1.id != v2.id
WHERE ST_DWithin(
    v1.location::geography,
    v2.location::geography,
    2000  -- within 2km
)
ORDER BY v1.id, distance_m;

CREATE UNIQUE INDEX idx_nearby_venues_pk
    ON nearby_venues (venue_id, nearby_id);

-- Now "find nearby" is a simple indexed lookup:
SELECT nearby_name, nearby_category, distance_m
FROM nearby_venues
WHERE venue_id = 42
ORDER BY distance_m
LIMIT 10;
```

The "find nearby" query that previously required a real-time spatial computation with GiST lookup, geography cast, and distance sort becomes a simple indexed lookup on venue_id. The materialized view trades storage for speed — and for a feature that runs on every page load, the trade is almost always worth it. A well-managed household does not recalculate what it already knows.
The critical detail: use REFRESH MATERIALIZED VIEW CONCURRENTLY to avoid blocking reads during refresh. This requires a unique index on the view, which is why the example includes the idx_nearby_venues_pk index on (venue_id, nearby_id).
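How you schedule the refresh is a matter for your infrastructure — cron on an application host serves perfectly well. If the pg_cron extension happens to be installed, the schedule can live in the database itself (a sketch; the job name and time are illustrative):

```sql
-- Refresh nightly at 03:15 using pg_cron
SELECT cron.schedule(
    'refresh-nearby-venues',
    '15 3 * * *',
    $$REFRESH MATERIALIZED VIEW CONCURRENTLY nearby_venues$$
);
```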
Reading the EXPLAIN ANALYZE: what the query plan is telling you
Running EXPLAIN (ANALYZE, BUFFERS) on a spatial query is how you confirm that your optimizations are actually working, rather than merely hoping. I would encourage you to make this a habit. The spatial-specific signals differ from standard query diagnostics:
```sql
-- What to look for in EXPLAIN ANALYZE on spatial queries:

-- 1. Is the GiST index being used?
--    GOOD: "Bitmap Index Scan on idx_locations_gist"
--    BAD:  "Seq Scan on locations" with "Rows Removed by Filter: 999994"

-- 2. Is there a bounding box pre-filter?
--    GOOD: "Index Cond: (location && _st_expand(...))"
--    This means the index eliminated most rows before exact checks

-- 3. Are you casting in the WHERE clause?
--    The planner matches index expressions exactly. If the plan shows a
--    Seq Scan despite a geography cast, add an index on the cast:
--    CREATE INDEX ... USING GiST ((location::geography))

-- 4. What is the ratio of index candidates to final results?
--    Bitmap Index Scan rows=200, then Filter reduces to rows=12
--    That is normal — the index uses bounding boxes, not exact shapes
--    If index returns 500,000 and filter keeps 5, your data may be
--    too clustered for the bounding box approach — consider SP-GiST
```

The four things to verify:
- GiST index usage. Look for "Bitmap Index Scan" or "Index Scan" on your GiST index. If you see "Seq Scan" with a high "Rows Removed by Filter" count, the index is not being used — either it does not exist, statistics are stale, or the query is using a function that cannot leverage the index.
- Bounding box pre-filter. The index condition should reference
_st_expandor&&(the bounding box overlap operator). This confirms PostGIS is using the two-phase approach: fast bounding box check, then exact geometry check. - Buffer hits vs reads. High
readcounts relative tohitcounts mean the spatial index or table pages are not in the buffer cache. For frequently run spatial queries, the relevant index and data pages should be cached after the first execution. - Candidate-to-result ratio. The index scan produces candidates; the filter refines them. A ratio of 50:1 (50 index candidates for every final result) suggests the data is very densely clustered and bounding boxes overlap heavily. This is normal for urban datasets. If the ratio is extreme (10,000:1), consider SP-GiST as an alternative index type.
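Should the candidate-to-result ratio point toward SP-GiST, creating one is a one-line experiment — though I would encourage benchmarking it against the GiST plan on your own data before committing:

```sql
-- SP-GiST partitions space rather than storing overlapping bounding
-- boxes, which can help with heavily clustered point data
CREATE INDEX idx_locations_spgist ON locations USING SPGIST (location);
ANALYZE locations;  -- refresh statistics so the planner considers it
```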
Paginating spatial results without losing your composure
Pagination with spatial queries presents a specific challenge: the results are sorted by distance, which is a computed value rather than an indexed column. This means every page request must compute distances for all candidates, sort them, and then return the requested slice. Three approaches, in ascending order of sophistication:
OFFSET pagination: simple, but degrades
```sql
-- OFFSET pagination with spatial queries — degrades at depth
SELECT name, distance_m
FROM (
    SELECT name, round(ST_Distance(
            location::geography,
            ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography
        )::numeric, 0) AS distance_m
    FROM locations
    WHERE ST_DWithin(
        location::geography,
        ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography,
        5000
    )
) nearby
ORDER BY distance_m
OFFSET 200 LIMIT 20;

-- PostgreSQL must compute and sort ALL results, then skip 200
```

OFFSET works, but PostgreSQL must compute and sort the full result set for every page — the database does not "remember" the sorted result between requests — and then discard every row before the offset. For shallow pagination (under 5-10 pages), the overhead is acceptable. For deep pagination, the discarded work grows with every page.
Keyset pagination: better for deep result sets
```sql
-- Keyset pagination with spatial queries — no skip-and-discard
-- Page 1:
SELECT id, name, round(ST_Distance(
        location::geography,
        ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography
    )::numeric, 0) AS distance_m
FROM locations
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography,
    5000
)
ORDER BY ST_Distance(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography
), id
LIMIT 20;

-- Page 2+ (using last row's distance and id as cursor):
--   WHERE ... AND (distance, id) > (last_distance, last_id)
-- This avoids recomputing and discarding the rows before the cursor
```

Keyset pagination avoids the OFFSET cost by using the last row's values as a cursor. For spatial queries, the cursor is (distance, id). PostgreSQL can use the ST_DWithin index to filter candidates, then scan forward from the cursor position. This is cheaper for deep pages because nothing is skipped and discarded, though every page still pays for the spatial filter itself.
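For completeness, here is what a page-2 query might look like written out in full. The cursor values are illustrative — in practice they come from the final row of the previous page, and the application should keep the unrounded distance for the cursor so no rows are skipped or repeated:

```sql
-- Page 2: resume after the last row of page 1
-- (cursor values 412.37 and 87 are illustrative)
SELECT id, name,
       ST_Distance(
           location::geography,
           ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography
       ) AS distance_m
FROM locations
WHERE ST_DWithin(
    location::geography,
    ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography,
    5000
)
  AND (ST_Distance(
           location::geography,
           ST_SetSRID(ST_MakePoint(-73.98, 40.74), 4326)::geography
       ), id) > (412.37, 87)
ORDER BY distance_m, id
LIMIT 20;
```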
Materialized approach: best for high-traffic features
```sql
-- The practical approach: materialize nearby results for pagination
-- Best for "find nearby" UIs where the user pages through results
CREATE MATERIALIZED VIEW nearby_locations AS
SELECT
    l.id,
    l.name,
    l.category,
    round(ST_Distance(
        l.location::geography,
        ref.location::geography
    )::numeric, 0) AS distance_m,
    ref.id AS reference_id
FROM locations l
CROSS JOIN reference_points ref
WHERE ST_DWithin(
    l.location::geography,
    ref.location::geography,
    10000
);

CREATE INDEX idx_nearby_ref_dist
    ON nearby_locations (reference_id, distance_m);

-- Now pagination is just a regular indexed query:
SELECT name, category, distance_m
FROM nearby_locations
WHERE reference_id = 1
ORDER BY distance_m
LIMIT 20 OFFSET 0;  -- OFFSET is fine here because the data is pre-filtered
```

For high-traffic "find nearby" features, pre-compute results into a materialized view and paginate against the view. The materialized view has a standard B-tree index on (reference_id, distance_m), so pagination is a regular indexed query with no spatial computation at all. This is the most performant approach for features that serve the same spatial query repeatedly — like a store locator page that multiple users access with the same reference point. The spatial work is done once, in advance, by staff who know how. The guest simply pages through the results.
How Gold Lapel sees to spatial query patterns
Everything in this guide can be implemented by hand. I would not have written it otherwise. But I should mention that Gold Lapel sits between your application and PostgreSQL, observing the actual query traffic, and for spatial workloads, three patterns stand out as consistently improvable.
First, Gold Lapel identifies spatial queries that would benefit from a GiST index but do not have one. An ST_DWithin query against an unindexed geometry column triggers a sequential scan — Gold Lapel flags the missing index and recommends the specific CREATE INDEX statement.
Second, Gold Lapel detects repeated spatial computations that could be pre-computed. If the same "find within 5km" query runs hundreds of times per hour against a dataset that changes daily, a materialized view with a scheduled refresh would serve identical results at a fraction of the cost. Gold Lapel identifies these candidates based on query frequency, parameter patterns, and the underlying data change rate.
Third, Gold Lapel monitors query plan stability for spatial queries. A GiST index that worked well at 100,000 rows may produce different plans at 10 million rows. The planner might switch strategies as data distribution changes — moving from an index scan to a sequential scan because the statistics suggest the index is no longer selective enough. Gold Lapel tracks these plan changes and alerts before you or your guests notice the degradation.