PostgreSQL Configuration Tuning: The 10 Settings That Actually Matter
Approximately 350 configuration parameters. Ten of them matter. Allow me to conduct a brief inventory.
Most of postgresql.conf does not matter
PostgreSQL ships with approximately 350 configuration parameters. The vast majority have no measurable effect on performance for typical workloads — they control niche behaviors, logging formats, locale settings, and internal bookkeeping that is already well-calibrated by default.
The internet is full of "ultimate PostgreSQL tuning guides" that list 30-50 settings to change. Most of those guides either modify parameters that have no measurable impact or change parameters that were already set appropriately. The result is a cluttered configuration file full of overrides that make the system harder to reason about without making it faster. This is not optimization — it is busywork.
This guide covers the 10 settings that consistently make a measurable difference. The order is deliberate — from highest impact to lowest. If you configure the first three settings correctly and stop there, you have captured the majority of available improvement from configuration tuning.
This guide complements the broader PostgreSQL performance tuning guide, which covers the full optimization landscape including indexing, query optimization, schema design, and monitoring. This article goes deep on configuration alone.
2. work_mem
What it does
work_mem controls memory allocated per-operation for internal sort operations, hash tables, and hash joins. This is not per-connection — it is per-operation within a query.
For a detailed explanation, see the work_mem glossary entry.
When a sort or hash exceeds the work_mem allocation, PostgreSQL spills to disk. A sort that completes in 5 ms in memory might take 500 ms when spilling to disk.
Default and recommended
Default: 4 MB. Recommended global default: 4-16 MB for OLTP workloads.
The challenge is the multiplication effect:
| work_mem | Connections | Ops/Query | Potential Total |
|---|---|---|---|
| 4 MB | 100 | 3 | 1.2 GB |
| 16 MB | 100 | 3 | 4.8 GB |
| 64 MB | 100 | 3 | 19.2 GB |
| 256 MB | 100 | 3 | 76.8 GB |
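The multiplication in the table can be sanity-checked with a quick back-of-envelope helper (a hypothetical sketch, using the table's decimal-GB rounding, not an official formula):

```python
def worst_case_work_mem_gb(work_mem_mb, connections, ops_per_query):
    """Worst-case memory if every connection runs ops_per_query
    sort/hash operations at once (decimal GB, matching the table)."""
    return work_mem_mb * connections * ops_per_query / 1000

# Reproduce the table rows above
for wm in (4, 16, 64, 256):
    print(f"work_mem={wm}MB -> {worst_case_work_mem_gb(wm, 100, 3):.1f} GB")
```

This is an upper bound, not a prediction: most connections are not sorting at any given moment. But the budget you must survive is the worst case, not the average.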
The proper pattern — and I consider this non-negotiable:
-- Keep the global default safe
ALTER SYSTEM SET work_mem = '8MB';
-- Raise per-transaction for specific analytical queries
-- (SET LOCAL only takes effect inside a transaction block)
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT ... complex analytical query ...;
COMMIT;
-- work_mem reverts to the default when the transaction ends
How to verify
EXPLAIN (ANALYZE, BUFFERS) SELECT ... your query ...;
Look for Sort Method: external merge Disk: (spilled to disk) versus Sort Method: quicksort Memory: (in-memory, good).
SELECT
datname,
temp_files,
pg_size_pretty(temp_bytes) AS temp_size
FROM pg_stat_database
WHERE datname = current_database();
Common mistakes
- Setting globally to 256 MB+ without calculating the multiplied effect
- Never checking whether the change helped — always run EXPLAIN before and after
- Ignoring hash joins — work_mem is not just for sorts
3. effective_cache_size
What it does
effective_cache_size is an estimate — not a memory allocation — that tells the query planner how much total memory is available for caching data. This includes both shared_buffers and the OS page cache.
Because it is only an estimate, getting this wrong wastes no memory, but it does produce suboptimal query plans. Correcting it carries no cost and no risk, with a potentially substantial reward.
Default and recommended
Default: 4 GB. Recommended: 50-75% of total system memory.
| System Memory | effective_cache_size | Reasoning |
|---|---|---|
| 8 GB | 6 GB | 75% — most memory available for caching |
| 16 GB | 12 GB | 75% |
| 32 GB | 24 GB | 75% |
| 64 GB | 48 GB | 75% |
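The table above reduces to one line of arithmetic; here is a minimal sketch (a hypothetical helper, assuming a dedicated database host):

```python
def effective_cache_size_gb(total_ram_gb, fraction=0.75):
    """Starting-point estimate: 75% of RAM on a dedicated database host.
    Drop toward 50% when other services share the machine (assumption)."""
    return int(total_ram_gb * fraction)

# Reproduce the table rows above
for ram in (8, 16, 32, 64):
    print(f"{ram} GB RAM -> effective_cache_size = '{effective_cache_size_gb(ram)}GB'")
```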
How to verify
EXPLAIN SELECT * FROM orders WHERE user_id = 'abc123';
If the planner chooses a sequential scan on a table with a relevant index and the data is largely cached, effective_cache_size may be too low.
Common mistakes
- Setting to 100% of RAM — the OS needs memory for processes and kernel buffers
- Confusing with shared_buffers — effective_cache_size does not allocate any memory
- Leaving at default on a large machine — a 64 GB machine with 4 GB effective_cache_size causes the planner to dramatically underestimate cache availability
4. maintenance_work_mem
What it does
maintenance_work_mem controls memory allocated for maintenance operations: VACUUM, CREATE INDEX, ALTER TABLE ADD FOREIGN KEY, CLUSTER, and REINDEX.
Default and recommended
Default: 64 MB. Recommended: 256 MB-1 GB.
| System Memory | maintenance_work_mem | Notes |
|---|---|---|
| 8 GB | 256 MB | Modest but effective |
| 16 GB | 512 MB | Good for medium tables |
| 32 GB | 1 GB | Handles large tables well |
| 64 GB | 1–2 GB | Diminishing returns above 1 GB for VACUUM |
The critical detail: autovacuum workers each use maintenance_work_mem. Consider setting autovacuum_work_mem separately:
-- Manual VACUUM and CREATE INDEX get 1 GB
ALTER SYSTEM SET maintenance_work_mem = '1GB';
-- Autovacuum workers each get 256 MB (3 workers = 768 MB max)
ALTER SYSTEM SET autovacuum_work_mem = '256MB';
For a comprehensive guide, see the autovacuum tuning guide and the PostgreSQL VACUUM guide.
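The combined budget is worth checking before applying those two settings. A minimal sketch (hypothetical helper; it assumes the default of three autovacuum workers and one concurrent manual maintenance operation):

```python
def maintenance_memory_budget_mb(maintenance_work_mem_mb,
                                 autovacuum_work_mem_mb,
                                 autovacuum_max_workers=3):
    """Upper bound on maintenance memory in use at once: every autovacuum
    worker busy plus one manual VACUUM or CREATE INDEX (assumption)."""
    return (autovacuum_max_workers * autovacuum_work_mem_mb
            + maintenance_work_mem_mb)

# Settings from the example above: 1 GB manual, 256 MB per worker
print(maintenance_memory_budget_mb(1024, 256), "MB")
```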
5. max_connections
What it does
max_connections sets the maximum number of concurrent connections. Each connection is backed by a dedicated OS process (PostgreSQL uses a process-per-connection model) and consumes approximately 5-10 MB while idle, more when active.
Default and recommended
Default: 100. Recommended: Keep at 100-200 and use a connection pooler.
# postgresql.conf
max_connections = 100                 # backend connections
superuser_reserved_connections = 3    # reserve for emergency admin access
At high connection counts, several subsystems degrade: process scheduling overhead, lock contention, memory consumption, and snapshot management. The connection pooler is not an optimization at scale; it is a fundamental architectural component.
For a comprehensive guide, see the connection pooling guide and the PgBouncer comparison.
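To make the process-per-connection cost concrete, here is a rough estimate using the 5-10 MB per-idle-backend figure from above (a hypothetical sketch; actual usage varies by version and workload):

```python
def idle_connection_overhead_mb(connections, per_conn_mb=(5, 10)):
    """Rough idle-backend memory range, using the article's 5-10 MB
    per-process figure (actual usage varies by workload and version)."""
    low, high = per_conn_mb
    return connections * low, connections * high

# An app fleet opening 500 direct connections instead of pooling
lo, hi = idle_connection_overhead_mb(500)
print(f"500 direct connections: {lo}-{hi} MB idle overhead")
```

A pooler funnels those 500 client connections into a few dozen server backends, which is why it pays for itself long before the memory runs out.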
6. random_page_cost
What it does
random_page_cost is the planner's estimate of the cost of a random page read relative to a sequential page read. It controls the preference for index scans versus sequential scans.
Default and recommended
Default: 4.0 — calibrated for spinning disks. For SSD: 1.1-1.5. For NVMe: 1.0-1.1. For in-memory workloads: 1.0.
Allow me to be direct: this is one of the highest-impact single-setting changes available. Reducing random_page_cost from 4.0 to 1.1 on SSD storage frequently transforms query plans. One number. Dozens of improved plans.
For a detailed guide, see the fix sequential scans guide.
How to verify
-- Before: random_page_cost = 4.0
EXPLAIN SELECT * FROM orders WHERE status = 'pending';
-- May show: Seq Scan on orders ... (if planner thinks index scan is too expensive)
-- Change the setting
SET random_page_cost = 1.1;
-- After: random_page_cost = 1.1
EXPLAIN SELECT * FROM orders WHERE status = 'pending';
-- May show: Index Scan using idx_orders_status on orders ...
If the plan improves, apply permanently:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
7. checkpoint_completion_target
What it does
checkpoint_completion_target controls how PostgreSQL paces checkpoint I/O. At 0.9, writes are spread over 90% of the interval between checkpoints, producing smooth I/O.
For background, see the checkpoint glossary entry.
Default and recommended
Default: 0.9 (PostgreSQL 14+) — well-calibrated, leave it alone. Older versions (PostgreSQL 13 and earlier): Default was 0.5, which caused periodic I/O storms. Set to 0.9.
-- Check your version and current value
SELECT version();
SHOW checkpoint_completion_target;
-- If on PostgreSQL 13 or earlier with default 0.5:
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();
How to verify
SELECT
checkpoints_timed,
checkpoints_req,
checkpoint_write_time,
checkpoint_sync_time,
buffers_checkpoint,
buffers_backend
FROM pg_stat_bgwriter;
A healthy system has checkpoints_timed >> checkpoints_req and low buffers_backend. (On PostgreSQL 17 and later, the checkpoint columns moved to the pg_stat_checkpointer view.)
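As a quick sanity check on those counters, the timed-checkpoint fraction can be computed directly (a hypothetical helper with illustrative numbers, not official tooling):

```python
def checkpoint_health(checkpoints_timed, checkpoints_req):
    """Fraction of checkpoints triggered by the timer rather than by
    WAL volume; values near 1.0 indicate healthy pacing."""
    total = checkpoints_timed + checkpoints_req
    return checkpoints_timed / total if total else None

# Illustrative counters: 950 timed, 50 requested -> 0.95, healthy
print(checkpoint_health(950, 50))
```

A fraction well below 1.0 usually means max_wal_size is too small for the write volume, forcing requested checkpoints between timer ticks.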
8. wal_buffers
What it does
wal_buffers controls memory for WAL data before it is flushed to disk. For background, see the WAL glossary entry.
Default and recommended
Default: -1 (auto-tuned to 1/32 of shared_buffers, capped at 16 MB). Recommended: Leave at -1. Once shared_buffers is 512 MB or larger, the auto-tuned wal_buffers hits the 16 MB cap, which is the practical maximum.
Manual override is only needed if shared_buffers is set very small. In most cases, the auto-tuning does its job quietly and well.
9. default_statistics_target
What it does
default_statistics_target controls the granularity of column statistics collected by ANALYZE. More histogram buckets mean finer-grained statistics, which produce better query plans.
Default and recommended
Default: 100. Recommended: 100-500 depending on data distribution. The setting can be applied per-column:
-- Raise statistics for a specific skewed column
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
-- Raise for a join column with many distinct values
ALTER TABLE events ALTER COLUMN user_id SET STATISTICS 300;
-- Re-analyze the table to collect new statistics
ANALYZE orders;
ANALYZE events;
How to verify
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
Large divergences between estimated and actual rows (10x or more) indicate the planner needs better statistics for that column.
10. huge_pages
What it does
huge_pages controls whether PostgreSQL requests 2 MB pages instead of 4 KB pages from the OS. For large shared_buffers allocations, this significantly reduces TLB misses.
This setting is for self-hosted deployments only. Managed platforms (RDS, Supabase, Neon, Heroku) manage huge pages at the infrastructure level. See the Supabase optimization guide and RDS optimization guide for platform-specific tuning.
Default and recommended
Default: 'try' (use huge pages if available, fall back to regular pages otherwise). Recommended: 'on' for self-hosted servers with shared_buffers >= 8 GB; with 'on', PostgreSQL refuses to start if huge pages are unavailable, so misconfiguration surfaces immediately.
| shared_buffers | TLB Entries (4 KB) | TLB Entries (2 MB) | Benefit |
|---|---|---|---|
| 2 GB | 524,288 | 1,024 | Modest |
| 8 GB | 2,097,152 | 4,096 | Measurable (1–5% throughput) |
| 16 GB | 4,194,304 | 8,192 | Significant |
| 32 GB | 8,388,608 | 16,384 | Significant |
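The entry counts in the table fall out of simple division; a minimal sketch to reproduce them (hypothetical helper, 2 MB expressed as 2048 KB):

```python
def tlb_entries(shared_buffers_gb, page_size_kb):
    """Page-table entries needed to map shared_buffers at a given page size."""
    return shared_buffers_gb * 1024 * 1024 // page_size_kb

# Reproduce the table rows above (4 KB pages vs 2 MB = 2048 KB pages)
for gb in (2, 8, 16, 32):
    print(f"{gb} GB: {tlb_entries(gb, 4):,} vs {tlb_entries(gb, 2048):,}")
```

The 512x reduction in entries is what shrinks TLB pressure; the hardware TLB holds only a few thousand entries, so mapping 8 GB with 4 KB pages guarantees constant misses.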
OS configuration required
# shared_buffers in bytes / 2 MB per huge page, plus overhead
# For 8 GB shared_buffers:
echo $((8 * 1024 / 2 + 100))   # = 4196 (extra 100 for overhead)
# Set for current session
sudo sysctl -w vm.nr_hugepages=4196
# Make persistent across reboots
echo 'vm.nr_hugepages = 4196' | sudo tee -a /etc/sysctl.conf
ALTER SYSTEM SET huge_pages = 'on';
-- Requires a restart
A sensible starting configuration
For a 32 GB dedicated server (SSD)
# Memory
shared_buffers = '8GB' # 25% of 32 GB
work_mem = '8MB' # Conservative OLTP default
maintenance_work_mem = '1GB' # Fast VACUUM and index creation
autovacuum_work_mem = '256MB' # Per-worker limit for autovacuum
effective_cache_size = '24GB' # 75% of 32 GB
# Connections
max_connections = 100 # Use a connection pooler for more
superuser_reserved_connections = 3 # Emergency admin access
# Planner
random_page_cost = 1.1 # SSD storage
default_statistics_target = 100 # Raise per-column as needed
# WAL and Checkpoints
wal_buffers = -1 # Auto-tuned from shared_buffers (16 MB)
checkpoint_completion_target = 0.9 # Smooth checkpoint I/O
# Huge Pages (requires OS configuration)
huge_pages = 'on'                  # vm.nr_hugepages >= 4196
Scaling for different server sizes
For a 16 GB server:
shared_buffers = '4GB'
effective_cache_size = '12GB'
maintenance_work_mem = '512MB'
autovacuum_work_mem = '256MB'
# huge_pages = 'try'               # Benefit is modest at 4 GB shared_buffers
For a 64 GB server:
shared_buffers = '16GB'
effective_cache_size = '48GB'
maintenance_work_mem = '2GB'
autovacuum_work_mem = '512MB'
huge_pages = 'on'                  # vm.nr_hugepages >= 8292
After applying changes
The most important step after changing configuration is measurement. A configuration change without verification is not optimization — it is hope.
- Restart PostgreSQL for static parameters. Run SELECT pg_reload_conf(); for dynamic parameters.
- Check pg_stat_bgwriter for checkpoint pressure.
- Check pg_stat_database for temporary file usage.
- Run EXPLAIN (ANALYZE, BUFFERS) on critical queries.
- Monitor system memory to ensure the OS still has adequate memory for page cache.
If a setting change does not produce measurable improvement, revert it. A simpler configuration with fewer overrides is easier to maintain and debug. Simplicity is not the absence of capability — it is the discipline to change only what demonstrably matters.