pg_cron
A well-run household keeps to a schedule. So should your database.
If I may make an observation: the databases I find in the best condition are never the ones with the most powerful hardware. They are the ones where someone bothered to set up a schedule. pg_cron is a PostgreSQL extension that lets you schedule recurring SQL jobs directly inside the database using familiar cron syntax. It runs as a background worker, executing tasks like VACUUM, materialized view refreshes, data purges, and statistics resets on a schedule you define — without needing an external cron daemon, task queue, or orchestration tool.
What pg_cron does
pg_cron adds a cron-compatible job scheduler to PostgreSQL. You register jobs by providing a cron expression and a SQL command, and the extension's background worker executes them on schedule. Jobs run inside the database with full access to SQL — any statement you can run in a psql session, you can schedule with pg_cron.
The extension stores job definitions in the cron.job table and logs execution history to cron.job_run_details. It supports standard five-field cron syntax (minute, hour, day of month, month, day of week), can run multiple jobs in parallel, and ensures that only one instance of each job runs at a time — if a run overlaps with the next scheduled execution, the new run queues until the previous one completes.
pg_cron was created by the Citus team (now part of Microsoft) and is one of the most widely deployed PostgreSQL extensions. It is available on nearly every major managed PostgreSQL provider.
When to use pg_cron
Any recurring database task that you currently trigger from an external cron job, application timer, or manual process is a candidate for pg_cron.
- VACUUM and ANALYZE — schedule targeted maintenance on specific tables outside autovacuum's general sweep
- Materialized view refreshes — keep dashboard summaries and reporting views up to date on a fixed cadence
- Data retention and cleanup — purge old rows, archive logs, or delete expired sessions on a schedule
- Partition management — create future partitions or detach old ones ahead of time
- Statistics resets — periodically call pg_stat_statements_reset() to keep monitoring windows clean
- Health checks and notifications — run diagnostic queries and write results to an alerting table
In short: the daily rounds that keep a household running. The sort of work that is invisible when done well and catastrophic when neglected.
Installation and setup
pg_cron is a third-party extension (not part of PostgreSQL contrib). It must be loaded as a shared library at server startup, which requires a one-time restart. On managed providers, this is typically handled through a configuration flag or parameter group setting.
```sql
-- 1. Add to postgresql.conf (requires restart)
shared_preload_libraries = 'pg_cron'

-- 2. Restart PostgreSQL, then create the extension
CREATE EXTENSION pg_cron;

-- 3. Verify it's working
SELECT * FROM cron.job;
```

One restart, one CREATE EXTENSION, and the household has a scheduler on staff.
By default, pg_cron installs into the postgres database. You can change this by setting cron.database_name in postgresql.conf before creating the extension. All schedules are evaluated in UTC unless you configure cron.timezone.
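As an illustration, a postgresql.conf fragment that loads pg_cron, points its metadata at a database named `app` (a placeholder name for your own), and evaluates schedules in a local zone might look like this. Note that `cron.timezone` is only available in recent pg_cron releases; older versions always use GMT.

```ini
# postgresql.conf — pg_cron settings
# (changing shared_preload_libraries requires a server restart)
shared_preload_libraries = 'pg_cron'    # load the scheduler's background worker
cron.database_name = 'app'              # database where cron.job and friends live
cron.timezone = 'America/New_York'      # evaluate schedules in this zone (recent versions)
```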
Cron syntax
pg_cron uses standard five-field cron expressions. If you have used cron on Linux, the syntax is identical.
```sql
-- Cron syntax: minute hour day-of-month month day-of-week
--
-- ┌──────────── minute (0-59)
-- │ ┌────────── hour (0-23)
-- │ │ ┌──────── day of month (1-31)
-- │ │ │ ┌────── month (1-12)
-- │ │ │ │ ┌──── day of week (0-7; 0 and 7 are Sunday)
-- │ │ │ │ │
-- * * * * *

-- Examples:
-- */5 * * * *      Every 5 minutes
-- 0 * * * *        Every hour on the hour
-- 0 3 * * *        Daily at 3:00 AM UTC
-- 0 0 * * 0        Every Sunday at midnight UTC
-- 0 0 1 * *        First day of every month at midnight UTC
-- 30 2 * * 1-5     Weekdays at 2:30 AM UTC
```

Scheduling jobs
The cron.schedule() function registers a new job. It takes a name, a cron expression, and a SQL command.
```sql
-- Schedule a VACUUM to run every night at 3:00 AM UTC
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM ANALYZE');

-- Refresh a materialized view every 15 minutes
SELECT cron.schedule('refresh-dashboard', '*/15 * * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY dashboard_summary');

-- Delete old rows every Sunday at 2:00 AM UTC
SELECT cron.schedule('weekly-cleanup', '0 2 * * 0',
  $$DELETE FROM events WHERE created_at < now() - interval '90 days'$$);

-- Reset pg_stat_statements every Monday at midnight UTC
SELECT cron.schedule('reset-stats', '0 0 * * 1', 'SELECT pg_stat_statements_reset()');
```

Four jobs, each registered with a single line of SQL. The database now attends to its own maintenance at the appointed hour, without being reminded.
Jobs begin executing on the next matching cron tick after they are created. The job name is a convenience label — use it to identify and unschedule jobs later.
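A convenient property of the named form of cron.schedule(): in recent pg_cron versions, calling it again with an existing job name updates that job's schedule and command in place rather than creating a duplicate, so there is no need to unschedule first. A sketch, assuming the 'nightly-vacuum' job from above already exists:

```sql
-- Move the nightly vacuum from 3:00 AM to 1:30 AM UTC.
-- Re-using the name updates the existing job (recent pg_cron versions).
SELECT cron.schedule('nightly-vacuum', '30 1 * * *', 'VACUUM ANALYZE');

-- Confirm the new schedule took effect
SELECT jobname, schedule FROM cron.job WHERE jobname = 'nightly-vacuum';
```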
Managing jobs and viewing history
All job definitions live in cron.job, and every execution is logged to cron.job_run_details. These tables are your primary interface for monitoring and troubleshooting.
```sql
-- List all scheduled jobs
SELECT jobid, schedule, command, nodename, database, username, active
FROM cron.job;

-- View recent job execution history
SELECT jobid, job_pid, status, return_message,
       start_time, end_time, end_time - start_time AS duration
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 20;

-- Disable a job without removing it
UPDATE cron.job SET active = false WHERE jobid = 1;

-- Remove a job by name
SELECT cron.unschedule('nightly-vacuum');

-- Remove a job by ID
SELECT cron.unschedule(1);
```

Cross-database scheduling
pg_cron is installed in a single database, but cron.schedule_in_database() lets you target any database in the same PostgreSQL cluster.
```sql
-- Schedule a job to run in a different database
SELECT cron.schedule_in_database(
  'vacuum-analytics',   -- job name
  '0 4 * * *',          -- schedule: 4:00 AM UTC daily
  'VACUUM ANALYZE',     -- SQL command
  'analytics'           -- target database name
);
```

Cleaning up job history
The cron.job_run_details table grows with every execution. For jobs that run frequently — every minute or every five minutes — this table can become large quickly. Schedule a self-cleaning job to keep it manageable. A scheduler that cannot tidy up after itself is, I'm afraid, an incomplete scheduler.
```sql
-- pg_cron logs every run to cron.job_run_details, which can grow large.
-- Schedule a self-cleaning job to purge old entries.
SELECT cron.schedule('purge-job-history', '0 5 * * *',
  $$DELETE FROM cron.job_run_details WHERE end_time < now() - interval '7 days'$$
);
```

Cloud availability
| Provider | Status |
|---|---|
| Amazon RDS / Aurora | Available — add to shared_preload_libraries in the parameter group |
| Google Cloud SQL | Available — enable via the cloudsql.enable_pg_cron database flag |
| Azure Database for PostgreSQL | Available — enable via the azure.extensions server parameter |
| Supabase | Available — enabled by default |
| Neon | Available — enable via dashboard; recommended on always-on computes only |
How Gold Lapel relates
Allow me a candid word about one of pg_cron's most popular duties. A great many teams use it to refresh materialized views on a fixed schedule — every 15 minutes, every hour, once a day. It works. But a fixed schedule is a compromise: refresh too often and you spend resources rebuilding data that hasn't changed; too rarely and your guests are reading yesterday's numbers.
Gold Lapel approaches this differently. Rather than refreshing on a clock, it watches actual query traffic and data change rates, then refreshes views precisely when they need it. Views stay fresh under load and idle when nobody is asking. The schedule, in effect, becomes adaptive.
I should note that this does not make pg_cron redundant — far from it. VACUUM schedules, data retention jobs, partition management, statistics resets — these are the routines of a well-maintained database, and pg_cron handles them admirably. Gold Lapel simply relieves it of the one job that benefits from something more responsive than a cron expression.