auto_explain
Automatically log execution plans for slow queries — because the problem rarely occurs while you are watching.
There is an uncomfortable truth about EXPLAIN ANALYZE: it only works when you already know which query to investigate, and when you happen to be present while the problem is occurring. Intermittent slowdowns do not keep appointments. auto_explain is a PostgreSQL contrib module that logs the execution plan for any statement exceeding a configurable duration threshold — automatically, continuously, whether you are watching or not. It catches the plans you would never think to ask for.
What auto_explain does
auto_explain hooks into PostgreSQL's executor and checks the execution time of every statement after it completes. If the statement took longer than the configured threshold (auto_explain.log_min_duration), the module logs the query's execution plan to the PostgreSQL server log.
By default, it logs the same output you would get from running EXPLAIN — the planner's estimated costs, row counts, and chosen access methods. With auto_explain.log_analyze enabled, it logs EXPLAIN ANALYZE output instead, which includes actual execution times, actual row counts, and buffer usage statistics.
The key advantage is patience. Slow queries are often intermittent — they appear under specific data distributions, parameter values, or concurrency conditions that vanish the moment you open a terminal to investigate. auto_explain does not blink. It captures the plan at the moment the problem occurs, giving you the exact execution plan that caused the slowdown, whether it happened at 3pm or 3am.
When to use auto_explain
auto_explain is most valuable when you need to understand why queries are slow, not just which ones. Specific use cases:
- Diagnosing intermittent slowdowns — queries that are usually fast but occasionally spike, often due to plan changes or parameter-dependent behavior
- Catching plan regressions — after a PostgreSQL upgrade, an ANALYZE, or a schema change, the planner may choose different plans for existing queries
- Debugging function internals — with log_nested_statements enabled, auto_explain logs plans for SQL executed inside PL/pgSQL functions, which are otherwise invisible
- Production troubleshooting — load it into a single session to capture plans without any server restart or configuration change
- Complementing pg_stat_statements — pg_stat_statements identifies the slow queries; auto_explain shows you the plans behind them
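As a sketch of that last workflow: pg_stat_statements can surface the worst offenders, and its average timings suggest a sensible auto_explain threshold. The column names below assume PostgreSQL 13 or later (earlier versions call the column mean_time rather than mean_exec_time).

```sql
-- Find the slowest statements by average execution time.
-- Requires the pg_stat_statements extension to be installed.
SELECT queryid,
       calls,
       round(mean_exec_time::numeric, 1) AS mean_ms,
       left(query, 60)                   AS query_start
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Then set the auto_explain threshold just below the slowest averages,
-- so the next occurrence of each query logs its plan.
SET auto_explain.log_min_duration = 200;
```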
Installation and setup
auto_explain is a contrib module that ships with PostgreSQL — no separate installation is needed. It can be loaded in two ways: server-wide via shared_preload_libraries (requires a restart), or per-session via the LOAD command (no restart, but requires superuser privileges).
Server-wide (persistent)
-- 1. Add to postgresql.conf (requires restart)
shared_preload_libraries = 'auto_explain'
-- 2. Configure the minimum duration threshold (in milliseconds)
auto_explain.log_min_duration = 250
-- 3. Restart PostgreSQL, then verify it's loaded
SHOW shared_preload_libraries;

After the restart, auto_explain is active for all sessions. Any query exceeding the threshold will have its plan written to the PostgreSQL log. Three lines of configuration, and your database begins keeping a record of its own misbehaviour.
Per-session (temporary)
-- Load auto_explain for the current session only (no restart needed)
LOAD 'auto_explain';
-- Set the threshold for this session
SET auto_explain.log_min_duration = 100;

This is useful for debugging a specific workload without affecting the entire server. The module is unloaded when the session ends.
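A per-session setup is also a convenient way to look inside a stored procedure. As a sketch (refresh_report is a hypothetical function, standing in for whatever you are debugging):

```sql
-- Per-session debugging of a function's internal queries.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement's plan
SET auto_explain.log_nested_statements = on;  -- include SQL run inside functions

-- Calling the function now logs a plan for each statement it executes.
-- refresh_report() is a hypothetical example function.
SELECT refresh_report();
```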
Configuration parameters
auto_explain exposes several GUC parameters. All can be set in postgresql.conf for server-wide defaults, or changed per-session by superusers.
-- Key configuration parameters
auto_explain.log_min_duration = 250 -- log plans for queries slower than 250ms (-1 disables)
auto_explain.log_analyze = off -- include actual timings (EXPLAIN ANALYZE) — adds overhead
auto_explain.log_buffers = off -- include buffer usage statistics
auto_explain.log_timing = on -- per-node timing (only relevant when log_analyze is on)
auto_explain.log_verbose = off -- include verbose output (aliases, schema names)
auto_explain.log_format = text -- output format: text, xml, json, yaml
auto_explain.log_nested_statements = off -- log plans inside functions/procedures
auto_explain.log_level = LOG -- log level: DEBUG5..LOG
auto_explain.sample_rate = 1 -- fraction of queries to explain (1 = all, 0.1 = 10%)

The log_analyze trade-off
With log_analyze = off (the default), auto_explain logs the planner's estimated plan — the same output as EXPLAIN without ANALYZE. This has negligible overhead because PostgreSQL already computed the plan before execution.
With log_analyze = on, auto_explain instruments every node in the execution plan to capture actual timings and row counts. This adds real overhead to every qualifying query — the executor must read the system clock at every plan node boundary. On workloads with many short queries or plans with many nodes, this can be significant.
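If the per-node clock reads are the main concern, there is a middle ground: keep log_analyze on but turn log_timing off. Actual row counts are still captured, and a row count that wildly disagrees with the planner's estimate is often enough to spot a bad plan.

```sql
-- Keep actual row counts, skip per-node timing.
-- Avoids the repeated system-clock reads that make log_analyze expensive,
-- at the cost of not seeing where within the plan the time went.
SET auto_explain.log_analyze = on;
SET auto_explain.log_timing = off;
```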
For production, a practical approach is to enable log_analyze with a high duration threshold (e.g., 500ms or 1s) and a reduced sample_rate:
-- In production, sample a fraction of queries to reduce overhead
auto_explain.log_min_duration = 500 -- only queries over 500ms
auto_explain.log_analyze = on -- include actual timings
auto_explain.sample_rate = 0.1 -- explain 10% of qualifying queries

Practical example
Here is what auto_explain output looks like when log_analyze and log_buffers are enabled. The plan appears in the PostgreSQL server log (not in the client session).
-- Enable EXPLAIN ANALYZE output for detailed timing
-- WARNING: this adds overhead — it instruments every node in the plan
SET auto_explain.log_analyze = on;
SET auto_explain.log_buffers = on;
SET auto_explain.log_min_duration = 100;
-- Run your query — the plan appears in the PostgreSQL log
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '7 days';
-- Example log output:
-- LOG: duration: 342.501 ms plan:
-- Nested Loop (cost=0.85..1234.56 rows=150 width=48)
-- (actual time=0.045..342.123 rows=147 loops=1)
-- Buffers: shared hit=892 read=34
-- -> Index Scan using orders_created_at_idx on orders o
-- (actual time=0.023..12.456 rows=147 loops=1)
-- Filter: (created_at > (now() - '7 days'::interval))
-- -> Index Scan using customers_pkey on customers c
-- (actual time=0.002..0.003 rows=1 loops=147)

This tells you not just that the query was slow (342ms), but exactly how that time was distributed across the plan nodes. You can see the actual row counts, buffer hits and reads, and which indexes were used. All captured without anyone needing to be awake, alert, or in the right terminal at the right moment.
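If these log entries feed a log aggregator rather than human eyes, the text format is awkward to parse. Switching the output format makes the plan machine-readable:

```sql
-- Emit plans as structured documents instead of indented text.
-- Supported values: text (default), xml, json, yaml.
SET auto_explain.log_format = 'json';
```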
Cloud availability
| Provider | Status |
|---|---|
| Amazon RDS / Aurora | Available — add to shared_preload_libraries in the parameter group (requires reboot) |
| Google Cloud SQL | Available — enable via the cloudsql.enable_auto_explain database flag |
| Azure Database for PostgreSQL | Available — add to shared_preload_libraries via server parameters (requires restart) |
| Supabase | Available — can be loaded per-session with LOAD 'auto_explain' |
| Neon | Check provider documentation — shared_preload_libraries are managed by Neon |
How Gold Lapel relates
If I may draw the comparison: auto_explain is an excellent witness. It observes the problem and files a detailed report in the server log. Gold Lapel performs similar plan analysis at the proxy level — every query that passes through is analyzed, its execution plan captured, its behaviour tracked over time — but it does not stop at observation. It acts on what it finds, driving optimizations like materialized view creation and index recommendations.
There is also the matter of overhead. auto_explain with log_analyze enabled must instrument every node in the execution plan, which adds measurable cost to every qualifying query. Gold Lapel obtains plan information from PostgreSQL's planner without that executor-level instrumentation. The analysis happens at the proxy, not inside the engine.
I should be clear: auto_explain remains valuable alongside Gold Lapel. The server log output is useful for ad-hoc debugging, compliance, and those moments when you simply want to see the raw plan with your own eyes. Gold Lapel takes over where the log entry ends — turning plan analysis into action rather than leaving it as an exercise for the reader.