pg_stat_monitor
If pg_stat_statements is a summary, this is the full dossier — time-bucketed histograms, per-query execution plans, and client attribution, built by Percona.
I have long maintained that a household cannot be properly managed without knowing what happened in the last five minutes — not merely what has happened since the house was built. pg_stat_monitor is a PostgreSQL extension developed by Percona that shares this philosophy. It collects query performance statistics with more granularity than pg_stat_statements, organising data into time buckets rather than accumulating from server start. It captures actual execution plans per query, records client IP and application name, and provides response time histograms. It can run alongside pg_stat_statements or replace it entirely.
What pg_stat_monitor does
pg_stat_monitor intercepts every query the PostgreSQL server processes — just like pg_stat_statements — but organises the collected statistics differently. Instead of a single cumulative row per query pattern, it breaks statistics into configurable time windows called buckets. Each bucket (default: 5 minutes) holds independent statistics, giving you a sliding window of recent query performance without needing to manually reset counters.
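With default settings you can see the retained windows directly. A minimal sketch against the pg_stat_monitor view, using the bucket and bucket_start_time columns:

```sql
-- List the time buckets currently retained, newest first
SELECT DISTINCT bucket, bucket_start_time
FROM pg_stat_monitor
ORDER BY bucket_start_time DESC;
```

Each bucket number corresponds to one slot in the ring of retained windows; once all slots are used, the oldest bucket is recycled.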
Beyond time bucketing, the extension captures data that pg_stat_statements does not: the actual execution plan text for each query, the client IP address and application name that originated the query, the command type (SELECT, INSERT, UPDATE, DELETE), and a response time histogram showing the distribution of execution times rather than just min/max/mean.
The result is a single view — pg_stat_monitor — where each row represents a distinct combination of query pattern, time bucket, client, and execution plan. This multi-dimensional grouping lets you answer questions like "did this query get slower in the last 10 minutes?" or "which application is generating the most load?" directly from SQL. The sort of questions that, in my experience, arrive at 2 a.m. and deserve better answers than "I'll check in the morning."
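The "did it get slower?" question above maps to a straightforward query over the bucketed rows. A sketch — the LIKE filter is purely illustrative; substitute your own query text or a queryid taken from the view:

```sql
-- Track how one query's latency moves across recent buckets
SELECT
    bucket_start_time,
    calls,
    round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_monitor
WHERE query LIKE '%FROM orders%'  -- hypothetical filter; adjust to taste
ORDER BY bucket_start_time;
```

A rising mean_ms across consecutive buckets is the signature of a recent regression; a flat line says the query has always been this way.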
When to use pg_stat_monitor
pg_stat_monitor is most valuable when you need more context than pg_stat_statements provides. Specific scenarios:
- Diagnosing recent regressions — time buckets let you compare query performance across 5-minute windows without resetting statistics
- Correlating queries with clients — the client_ip and application_name columns identify which service or connection pool is generating problematic queries
- Capturing execution plans automatically — instead of manually running EXPLAIN on suspected slow queries, pg_stat_monitor stores the actual plan used for each execution
- Understanding latency distribution — the response time histogram reveals whether a query's high average is caused by consistent slowness or occasional spikes
- Tracking plan changes — the planid column detects when PostgreSQL switches to a different execution plan for the same query, which often explains sudden performance changes
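The plan-change case in particular lends itself to a standing diagnostic query. A sketch that groups on planid, assuming plan capture is enabled so the column is populated:

```sql
-- Queries that have executed under more than one plan
SELECT
    queryid,
    count(DISTINCT planid) AS plan_count,
    round(min(mean_exec_time)::numeric, 2) AS best_mean_ms,
    round(max(mean_exec_time)::numeric, 2) AS worst_mean_ms,
    query
FROM pg_stat_monitor
GROUP BY queryid, query
HAVING count(DISTINCT planid) > 1
ORDER BY plan_count DESC;
```

A wide gap between best_mean_ms and worst_mean_ms for the same query is a strong hint that a plan flip, not the query itself, is the culprit.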
If you only need basic "what are my slowest queries" visibility and want the lightest possible overhead, pg_stat_statements alone is sufficient. It is a fine tool and I would not have you replace it without cause. Add pg_stat_monitor when you need the time dimension, plan capture, or client attribution — when, that is, you need not just the what but the when and the who.
Installation and setup
pg_stat_monitor is a third-party extension developed by Percona. It is not included in the PostgreSQL contrib modules, so it must be installed separately — from Percona's repositories, from source, or via PGXN.
Like pg_stat_statements, it must be loaded via shared_preload_libraries, which requires a PostgreSQL restart when enabling it for the first time.
-- 1. Add to postgresql.conf (requires restart)
shared_preload_libraries = 'pg_stat_monitor'
-- 2. Restart PostgreSQL, then create the extension
CREATE EXTENSION pg_stat_monitor;
-- 3. Verify it's working
SELECT count(*) FROM pg_stat_monitor;
If you want to run both extensions simultaneously, list pg_stat_statements first:
-- Running alongside pg_stat_statements (order matters)
shared_preload_libraries = 'pg_stat_statements, pg_stat_monitor'
After installation, the extension begins tracking immediately with sensible defaults. One restart, one CREATE EXTENSION, and your database starts keeping a proper logbook. The most commonly tuned parameter is pgsm_bucket_time, which controls how long each time bucket lasts (default: 300 seconds).
Configuration
All configuration parameters are prefixed with pg_stat_monitor. and can be viewed from the pg_settings view. Some require a server restart; others can be changed with a reload.
-- View current pg_stat_monitor configuration
SELECT name, setting, short_desc
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor%';
-- Key parameters (set in postgresql.conf, restart required for some)
-- pg_stat_monitor.pgsm_max = 256MB (shared memory limit)
-- pg_stat_monitor.pgsm_max_buckets = 10 (number of time buckets)
-- pg_stat_monitor.pgsm_bucket_time = 300 (seconds per bucket)
-- pg_stat_monitor.pgsm_query_max_len = 2048 (max query text length)
-- pg_stat_monitor.pgsm_enable = yes (enable/disable monitoring)
-- pg_stat_monitor.pgsm_track_utility = yes (track utility commands)
-- pg_stat_monitor.pgsm_query_plan = no (store execution plans)
The most important parameters:
- pgsm_bucket_time (default: 300s) — duration of each time bucket. Shorter buckets give finer granularity but consume more memory.
- pgsm_max_buckets (default: 10) — number of buckets retained. With 300s buckets and 10 buckets, you get a 50-minute sliding window.
- pgsm_query_plan (default: no) — whether to capture and store execution plans. Adds overhead; enable during active investigation.
- pgsm_max (default: 256MB) — shared memory allocated. Increase if you track a very large number of distinct query patterns.
- pgsm_query_max_len (default: 2048) — maximum stored query text length. Increase for applications that generate very long SQL.
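The bucket parameters work as a pair: duration times count gives the length of your sliding window. A sketch of a postgresql.conf fragment that widens the window, assuming defaults elsewhere; bucket-related settings may require a restart to take effect:

```
# Widen the sliding window to roughly two hours
pg_stat_monitor.pgsm_bucket_time = 600   # 10-minute buckets
pg_stat_monitor.pgsm_max_buckets = 12    # 12 buckets of 600s = a 2-hour window
```

Longer windows cost more shared memory, so keep an eye on pgsm_max if you raise the bucket count substantially.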
Key columns
The pg_stat_monitor view includes all the timing and I/O columns from pg_stat_statements, plus additional columns for bucketing, plans, client attribution, and histograms.
-- Key columns beyond what pg_stat_statements provides
-- Bucketing
bucket -- bucket number for this stats row
bucket_start_time -- when this time bucket began
-- Query identity
queryid -- normalized query hash
planid -- execution plan hash (distinct plans per query)
query_plan -- actual execution plan text
-- Client attribution
client_ip -- IP address of the client
application_name -- application_name from the connection
-- Timing (same as pg_stat_statements)
total_exec_time -- cumulative execution time (ms)
mean_exec_time -- average execution time per call (ms)
min_exec_time -- fastest execution (ms)
max_exec_time -- slowest execution (ms)
stddev_exec_time -- standard deviation of execution time
-- Histogram
resp_calls -- response time histogram buckets (array)
-- Metadata
cmd_type -- command type (SELECT, INSERT, UPDATE, DELETE)
elevel -- error level (0 = success)
message -- error or warning message text
Practical queries
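Queries that returned errors
The elevel and message columns listed above mean that failed statements can be pulled straight from the view. A sketch, assuming a non-zero elevel marks an error or warning:

```sql
-- Statements that raised errors or warnings recently
SELECT
    bucket_start_time,
    elevel,
    message,
    calls,
    query
FROM pg_stat_monitor
WHERE elevel > 0
ORDER BY bucket_start_time DESC
LIMIT 20;
```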
Top queries in the current time bucket
See which queries consumed the most time in the most recent monitoring window — no need to reset statistics first.
-- Top queries by total execution time in the current bucket
SELECT
bucket_start_time,
calls,
round(total_exec_time::numeric, 2) AS total_ms,
round(mean_exec_time::numeric, 2) AS mean_ms,
query
FROM pg_stat_monitor
WHERE bucket_start_time = (
SELECT max(bucket_start_time) FROM pg_stat_monitor
)
ORDER BY total_exec_time DESC
LIMIT 10;
Queries with captured execution plans
When pgsm_query_plan is enabled, retrieve the stored plans for the slowest queries without needing to re-run them with EXPLAIN.
-- Retrieve the actual execution plan stored for a query
SELECT
calls,
round(mean_exec_time::numeric, 2) AS mean_ms,
query,
query_plan
FROM pg_stat_monitor
WHERE query_plan IS NOT NULL
ORDER BY mean_exec_time DESC
LIMIT 5;
Load by client and application
Identify which applications or services are generating the most database load. In a household with multiple staff, it pays to know who is ringing the bell most often.
-- Break down query load by client IP and application
SELECT
client_ip,
application_name,
count(*) AS distinct_queries,
sum(calls) AS total_calls,
round(sum(total_exec_time)::numeric, 2) AS total_ms
FROM pg_stat_monitor
GROUP BY client_ip, application_name
ORDER BY total_ms DESC;
Response time histograms
The resp_calls column contains an array representing the distribution of response times across configurable histogram buckets. An average of 50ms sounds perfectly respectable until the histogram reveals that half the executions finish in 2ms and the other half take 100ms. Averages, I'm afraid, are accomplished liars.
-- View response time histogram for a specific query
SELECT
query,
calls,
resp_calls,
round(mean_exec_time::numeric, 2) AS mean_ms
FROM pg_stat_monitor
WHERE calls > 100
ORDER BY mean_exec_time DESC
LIMIT 5;
Cloud availability
| Provider | Status |
|---|---|
| Amazon RDS / Aurora | Not available — feature request pending with AWS |
| Google Cloud SQL | Not available — use pg_stat_statements instead |
| Azure Database for PostgreSQL | Not available — use pg_stat_statements instead |
| Supabase | Available — enable via the extensions dashboard |
| Neon | Not available — use pg_stat_statements instead |
As a third-party extension, pg_stat_monitor has limited availability on managed platforms compared to the contrib-bundled pg_stat_statements. It is fully supported on self-managed PostgreSQL installations and in Percona's own distribution. If your provider does not offer it, pg_stat_statements remains the standard alternative — less detail, but universally available. One works with what the household provides.
How Gold Lapel relates
I should be transparent about the overlap. Gold Lapel monitors every query at the proxy level — normalising, tracking execution statistics across time windows, and acting on what it finds. Time-bucketed statistics, plan awareness, client attribution: these are capabilities Gold Lapel provides as a matter of course, without an extension to install or a restart to schedule.
Where pg_stat_monitor earns its place is as a second vantage point. Gold Lapel observes from the proxy; pg_stat_monitor observes from inside PostgreSQL itself. The two perspectives are not redundant — they are complementary. Queries that bypass the proxy, time spent in PostgreSQL's planner, internal background operations: these are things only a database-side view can surface.
If you run both, you have two independent accounts of the same workload: one that acts on what it sees, and one that provides the full internal record. I find that arrangement rather reassuring. A well-run household benefits from more than one pair of attentive eyes.