
Slow query log

Before the extensions, before the dashboards, there was the log. It remains the most direct way to find queries that need attention.

Concept · March 21, 2026 · 8 min read

PostgreSQL can log every query that exceeds a duration threshold. Set log_min_duration_statement to a value in milliseconds — say, 500 — and any query that takes longer than 500ms is written to the server log with its full SQL text and actual execution time. This is the slow query log. It requires no extensions, no third-party tools, and no restart to enable. One setting, one reload, and PostgreSQL begins telling you exactly which queries are misbehaving. I am fond of tools that ask so little and give so much.

What the slow query log is

The slow query log is not a separate log file or a dedicated feature with its own interface. It is a behavior of PostgreSQL's standard logging system, controlled by the log_min_duration_statement parameter. When this parameter is set to a positive value, PostgreSQL measures the wall-clock duration of every statement and writes a log entry for any statement that exceeds the threshold.

Each log entry includes the duration in milliseconds and the full SQL text of the query, including literal parameter values. This distinction matters. Unlike pg_stat_statements, which normalizes queries by replacing parameters with placeholders, the slow query log captures the exact query as it was sent. You can see which specific user IDs, date ranges, or filter values triggered the slow execution. The evidence is concrete, not aggregated away.
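With a prefix showing timestamp, PID, user, and database (the '%t [%p] %u@%d ' prefix configured below), an entry looks roughly like this. The timestamp, PID, names, and SQL are illustrative:

```text
2026-03-21 14:23:45 UTC [81234] app_user@orders LOG:  duration: 912.455 ms  statement: SELECT * FROM orders WHERE customer_id = 4821 AND created_at > '2026-03-01'
```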

The default value of log_min_duration_statement is -1, which disables duration-based logging entirely. Setting it to 0 logs every query regardless of duration — useful for short debugging sessions, but rather impractical in production. The log fills quickly, and a full log is no more useful than an empty one.

Configuring the slow query log

The core setting is log_min_duration_statement. It accepts a value in milliseconds and can be changed without a restart — a courtesy that not every PostgreSQL parameter extends.

SQL
-- Log all queries that take longer than 500ms
ALTER SYSTEM SET log_min_duration_statement = 500;

-- Apply without restart
SELECT pg_reload_conf();

-- Verify the setting
SHOW log_min_duration_statement;
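The threshold can also be scoped more narrowly than the whole server, which is handy when one database is noisy or one role runs legitimately long batch jobs. A sketch, assuming a database named mydb and a role named reporting (both hypothetical names):

```sql
-- Override the threshold for one database only
ALTER DATABASE mydb SET log_min_duration_statement = 250;

-- Or for one role, e.g. a batch user allowed to run longer queries
ALTER ROLE reporting SET log_min_duration_statement = 2000;

-- Remove an override and fall back to the server-wide setting
ALTER DATABASE mydb RESET log_min_duration_statement;
```

Per-database and per-role settings take effect for new sessions; existing connections keep the value they started with.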

Several related settings control what appears in each log entry and how the logs are structured.

log_line_prefix

Controls the metadata prepended to each log line. A well-chosen prefix includes the timestamp, process ID, username, and database name — enough context to filter and correlate log entries without burying the information you actually came for.

postgresql.conf
# A useful log_line_prefix for slow query analysis
# In postgresql.conf:
log_line_prefix = '%t [%p] %u@%d '

# %t  = timestamp
# %p  = process ID
# %u  = user name
# %d  = database name

# For more context, add %a for the application name:
log_line_prefix = '%t [%p] %u@%d %a '

log_statement

log_statement is a separate setting that logs queries by type (DDL, DML, or all) regardless of duration. It serves a different purpose — auditing, not performance analysis. For slow query work, leave it at none and let log_min_duration_statement do the filtering.

postgresql.conf
# log_statement controls which statements are logged regardless of duration
# Options: none, ddl, mod, all
# In postgresql.conf:

log_statement = 'none'    # default: only log via log_min_duration_statement
log_statement = 'ddl'     # log CREATE, ALTER, DROP statements
log_statement = 'mod'     # log DDL + INSERT, UPDATE, DELETE
log_statement = 'all'     # log every statement (very verbose)

# log_statement = 'all' is useful for debugging but generates enormous logs
# For slow query analysis, leave it at 'none' and rely on log_min_duration_statement

CSV log format

Switching to csvlog format produces structured, machine-parseable log files that can be loaded into a PostgreSQL table for SQL-based analysis. This is especially useful when you want to aggregate slow queries by database, user, or time window.

postgresql.conf + SQL
# Enable CSV log format for structured parsing
# In postgresql.conf:
log_destination = 'csvlog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d.log'
# With csvlog, the collector writes the CSV file with a .csv
# extension in place of .log

-- CSV logs can be loaded into a table for SQL analysis:
CREATE TABLE postgres_log (
  log_time timestamp with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text,
  backend_type text,     -- PostgreSQL 13+
  leader_pid integer,    -- PostgreSQL 14+
  query_id bigint        -- PostgreSQL 14+
);

COPY postgres_log FROM '/path/to/pg_log/postgresql-2026-03-21.csv' WITH CSV;
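Once loaded, slow query entries can be aggregated with ordinary SQL. A sketch; the pattern match assumes the default English-language "duration: ... ms" messages:

```sql
-- Count slow query entries and the worst duration per hour
SELECT date_trunc('hour', log_time) AS hour,
       count(*) AS slow_queries,
       max((substring(message FROM 'duration: ([0-9.]+) ms'))::numeric) AS worst_ms
FROM postgres_log
WHERE message LIKE 'duration:%'
GROUP BY 1
ORDER BY 1;
```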

Choosing a threshold

Choosing a threshold is a judgment call, and I find that reasonable people arrive at different answers depending on their workload and their tolerance for log volume. Allow me to suggest three common starting points.

Threshold | Best for                                                            | Log volume
1000ms    | Noisy, high-throughput systems. Catches only the worst offenders.   | Low
500ms     | General-purpose starting point. Good balance of signal and volume.  | Moderate
100ms     | Latency-sensitive applications. Aggressive optimization.            | Higher

The practical approach is to start high and lower the threshold over time. Set it to 1000ms, fix everything that shows up, then lower to 500ms. Fix those, lower to 200ms, and so on. Each round surfaces a new layer of queries that were previously hidden beneath worse offenders. You stop lowering when the queries that appear are fast enough for your requirements or when the log volume becomes unmanageable. The process has a satisfying rhythm to it — like polishing silver, each pass reveals what the previous one could not.

On managed database services where log storage costs money, be especially mindful of thresholds below 100ms — they can generate gigabytes of logs per day on active systems.
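If you already have a log collected at a low threshold, you can estimate how many entries each candidate threshold would have produced before committing to one. A small sketch (the log path is illustrative, and the pattern assumes the default "duration: ... ms" message format):

```shell
# count_over THRESHOLD_MS FILE: count log entries at or above the threshold
count_over() {
  grep -oE 'duration: [0-9]+' "$2" \
    | awk -v t="$1" '{ if ($2 >= t) c++ } END { print c+0 }'
}

# Compare candidate thresholds against an existing log, e.g.:
#   for t in 100 500 1000; do
#     echo "threshold ${t}ms: $(count_over "$t" /var/log/postgresql/postgresql-16-main.log) entries"
#   done
```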

Analyzing slow query logs

Raw log files are useful for spot-checking but impractical for systematic analysis. If you intend to do more than glance at the occasional offender, you will want proper tools.

pgBadger

pgBadger is the standard tool for PostgreSQL log analysis. It parses log files — standard, syslog, or CSV format — and generates HTML reports with query statistics, histograms, graphs, and rankings of the slowest queries. It normalizes query text (replacing literals with placeholders) to group similar queries, much like pg_stat_statements does in memory.

Run it against a day's logs and you get a clear picture of which query patterns dominate execution time, when they peak, and how they distribute across databases and users.
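A typical invocation, assuming pgBadger is installed and the stock Debian/Ubuntu log path (adjust both for your system):

```shell
# Generate an HTML report from a day's log; pgBadger auto-detects the log format
pgbadger /var/log/postgresql/postgresql-16-main.log -o report.html
```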

grep patterns

For quick, targeted analysis without additional tools, grep remains entirely serviceable. It lacks ceremony, but it gets the job done.

Shell
# Find all slow query entries in the PostgreSQL log
grep "duration:" /var/log/postgresql/postgresql-16-main.log

# Find queries slower than 1 second (four digits before the decimal)
grep -E "duration: [0-9]{4,}\." /var/log/postgresql/postgresql-16-main.log

# Find slow queries from a specific database
grep "duration:" /var/log/postgresql/postgresql-16-main.log | grep "mydb"

# Count slow queries per hour
grep "duration:" /var/log/postgresql/postgresql-16-main.log \
  | cut -d' ' -f1,2 | cut -d: -f1 | sort | uniq -c | sort -rn

auto_explain

The slow query log tells you what was slow. auto_explain tells you why — and knowing what without knowing why is rather like being told which room has the leak without being told where the water is coming from. This contrib module automatically logs the execution plan for queries that exceed a duration threshold: the same information you would get from running EXPLAIN ANALYZE manually, but captured automatically for every slow query in production.

postgresql.conf
# auto_explain: automatically log execution plans for slow queries
# In postgresql.conf:
shared_preload_libraries = 'auto_explain'

# Log EXPLAIN output for queries over 500ms
auto_explain.log_min_duration = 500

# Include ANALYZE-style actual row counts and timing
# (adds timing overhead to every statement, even those never logged)
auto_explain.log_analyze = on

# Include buffer usage statistics
auto_explain.log_buffers = on

# Also log statements executed inside functions, not just top-level queries
auto_explain.log_nested_statements = on

# Changing shared_preload_libraries requires a restart
# The auto_explain.* settings can be changed with pg_reload_conf()

With auto_explain enabled alongside the slow query log, each slow query entry is followed by its full execution plan. You can see whether a sequential scan replaced an expected index scan, whether row estimates were wildly off, or whether a nested loop ran millions of iterations. This eliminates the need to reproduce slow queries manually in order to diagnose them.
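An entry then looks roughly like this; the query, plan, and timings are illustrative:

```text
LOG:  duration: 912.455 ms  plan:
	Query Text: SELECT * FROM orders WHERE customer_id = 4821
	Seq Scan on orders  (cost=0.00..18334.00 rows=5 width=244) (actual time=0.031..912.310 rows=1 loops=1)
	  Filter: (customer_id = 4821)
	  Rows Removed by Filter: 1048570
```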

Slow query log vs pg_stat_statements

I am occasionally asked which of these to choose, and the honest answer is that the question itself is slightly wrong. They answer different questions, and both belong in your diagnostic toolkit.

            | Slow query log                                | pg_stat_statements
Granularity | Individual executions with exact parameters   | Aggregated patterns with normalized SQL
Storage     | Disk (log files)                              | Memory (shared memory)
Threshold   | Only logs queries above a duration threshold  | Tracks all queries regardless of duration
Parameters  | Full SQL with literal values                  | Normalized with $1, $2 placeholders
History     | As long as logs are retained                  | Since last reset or server restart
Setup       | One GUC setting, no restart                   | Shared preload library, requires restart

Use pg_stat_statements to identify which query patterns consume the most total database time — it captures the full workload, including fast queries that run millions of times. Use the slow query log to investigate specific executions — the exact parameters, the exact timestamp, and (with auto_explain) the exact plan that made a particular execution slow.

Consider: a query that averages 5ms but occasionally spikes to 3 seconds will appear unremarkable in pg_stat_statements but will show up clearly in the slow query log with the specific parameters that triggered the slow path. Each tool sees what the other misses. This is precisely why you want both.

How Gold Lapel relates

Gold Lapel captures query timing data at the proxy level — every query passes through it before reaching PostgreSQL, so it records the same duration information that the slow query log would capture, but without requiring log parsing or file I/O. It sees the full SQL text with parameters, measures execution time, and identifies the slow outliers automatically.

This means you get slow query visibility without configuring log_min_duration_statement, without managing log rotation, and without running pgBadger or grep. Gold Lapel surfaces the slow queries directly in its analysis, correlated with execution plans and optimization recommendations. If you already have the slow query log configured, nothing changes — Gold Lapel works alongside it. But if you have not gotten around to setting it up, Gold Lapel has been attending to the matter quietly on your behalf.

Frequently asked questions