Shared buffers
The single most important memory parameter in PostgreSQL — and the one most likely to still be set to its default. If I may, allow me to explain why that default is insufficient and what to do about it.
shared_buffers is the PostgreSQL configuration parameter that sets the size of the shared memory buffer cache — the memory pool where PostgreSQL stores recently accessed 8KB data pages so that repeated reads can be served from RAM instead of disk. Every backend process connected to the database shares this same memory segment. The default is 128MB, which is adequate for development and very little else. The standard recommendation is 25% of total system RAM, and it is one of the few rules of thumb in PostgreSQL that genuinely deserves its reputation.
What shared_buffers is
shared_buffers defines the size of a shared memory segment that PostgreSQL allocates at startup. This memory is divided into 8KB pages — the same page size PostgreSQL uses on disk — and every backend process reads from and writes to this shared pool.
When a query needs data from a table or index, PostgreSQL checks the buffer cache first. If the page is already resident (a buffer hit), the read completes in microseconds. If it is not (a buffer miss), PostgreSQL reads the page from the filesystem into the buffer cache, evicting an older page if necessary. The goal is pleasingly straightforward: keep the pages you read most often in memory so that most queries never wait for disk.
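You can watch hits and misses happen for an individual query with EXPLAIN's BUFFERS option, which reports how many pages came from the cache versus the filesystem. A quick sketch — the table name here is a stand-in for one of your own:

-- "Buffers: shared hit=N" counts pages served from the cache;
-- "read=N" counts pages fetched from the filesystem.
-- Run the same query twice: the second run should show mostly hits.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders;  -- "orders" is a placeholder table

The first execution typically shows a mix of hit and read; once the pages are resident, subsequent runs read almost nothing from disk.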
I should note that this is not a per-connection allocation. It is a single pool shared across all connections. A server with 200 connections and shared_buffers = 8GB has 8GB of buffer cache total, not 8GB per connection. This shared design is what makes it efficient — hot pages loaded by one connection are immediately available to every other connection. One well-stocked pantry, not two hundred private ones.
Sizing shared_buffers
The widely accepted starting point is 25% of total system RAM. On a 64GB server, that means 16GB. On a 4GB development machine, 1GB. This rule has persisted for years because it is genuinely sound — not every piece of conventional wisdom ages well, but this one has earned its keep.
# postgresql.conf examples by server RAM
# -----------------------------------------
# 4 GB server → shared_buffers = 1GB
# 16 GB server → shared_buffers = 4GB
# 64 GB server → shared_buffers = 16GB
# 128 GB server → shared_buffers = 32GB
# Minimum recommended for production
shared_buffers = 256MB

Why 25% and not more? Because PostgreSQL is not the only thing using memory. The operating system maintains its own page cache that acts as a second tier of caching for PostgreSQL's data files. Other consumers include work_mem allocations for sorts and hash joins, connection overhead, and any other processes on the server. Setting shared_buffers to 50% or higher starves these other consumers and usually makes things worse, not better. Generosity with one resource at the expense of all others is not generosity — it is misallocation.
The minimum for any production workload is 256MB. Below that, the buffer cache is too small to hold even a modest working set. And yet the default remains 128MB. One does wonder who that default is intended to serve.
Diminishing returns set in well before you reach 40% of RAM. If your cache hit ratio is still low after setting shared_buffers to 25%, the problem is almost certainly not memory — it is queries that scan too much data. A missing index on a large table will defeat any amount of buffer cache you care to throw at it.
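One quick way to spot that pattern is pg_stat_user_tables, which counts sequential versus index scans per table. Large tables with a high seq_tup_read and a negligible idx_scan are the usual suspects for a missing index:

-- Tables being read via sequential scans (candidates for a missing index)
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;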
Checking your current setting
Use SHOW or query pg_settings directly.
-- Check the current shared_buffers value
SHOW shared_buffers;
-- Or query it from pg_settings for more detail
SELECT name, setting, unit, boot_val, reset_val, source
FROM pg_settings
WHERE name = 'shared_buffers';

shared_buffers is a postmaster-level parameter. Changing it requires editing postgresql.conf and restarting the server — a reload is not sufficient. PostgreSQL allocates the shared memory segment at startup, and its size is fixed for the lifetime of the process. I mention this upfront because discovering you need a restart after planning a live change is the sort of surprise no one enjoys.
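If you prefer not to hand-edit postgresql.conf, ALTER SYSTEM writes the setting to postgresql.auto.conf for you — the restart is still required. The value below is illustrative; substitute roughly 25% of your own RAM:

-- Writes the setting to postgresql.auto.conf; takes effect only after a restart
ALTER SYSTEM SET shared_buffers = '16GB';

-- After restarting (e.g. pg_ctl restart or your service manager), verify:
SHOW shared_buffers;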
Monitoring buffer usage
The key metric is the buffer cache hit ratio: the fraction of page reads served from memory rather than disk. PostgreSQL tracks this in pg_stat_database via the blks_hit and blks_read counters. This is the number I check first.
-- Buffer cache hit ratio from pg_stat_database
SELECT
    datname,
    blks_hit,
    blks_read,
    round(
        blks_hit::numeric / nullif(blks_hit + blks_read, 0) * 100, 2
    ) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();

For OLTP workloads, a hit ratio above 99% is typical and healthy. Between 95% and 99% warrants a closer look. Below 95% is the kind of number that suggests the working set does not fit in the cache — or, more often, that queries are reading far more data than they should.
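The database-wide ratio can hide one badly behaved table. pg_statio_user_tables breaks the same hit and read counters down per relation, which points you at the tables doing the damage:

-- Per-table hit ratio; tables with heavy heap_blks_read deserve a look
SELECT
    relname,
    heap_blks_hit,
    heap_blks_read,
    round(
        heap_blks_hit::numeric
        / nullif(heap_blks_hit + heap_blks_read, 0) * 100, 2
    ) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;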
To see how much of the buffer cache is actually occupied, use the pg_buffercache extension (install it first with CREATE EXTENSION pg_buffercache if it is not already present).
-- How much of shared_buffers is actually in use (requires pg_buffercache)
SELECT
    count(*) AS total_buffers,
    sum(CASE WHEN reldatabase IS NOT NULL THEN 1 ELSE 0 END) AS used_buffers,
    round(
        sum(CASE WHEN reldatabase IS NOT NULL THEN 1 ELSE 0 END)::numeric
        / count(*) * 100, 2
    ) AS usage_pct
FROM pg_buffercache;

If usage_pct is well below 100%, your buffer cache is larger than your working set and increasing it further would have no effect — a reassuring finding. If it is consistently at 100%, the cache is full and actively evicting pages, which may or may not be a problem depending on your hit ratio. Full is not inherently bad. Full with a poor hit ratio is.
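pg_buffercache can also tell you which relations those buffers belong to, which is useful for seeing whether the cache is occupied by the tables you actually care about. Joining on the relation's filenode is the usual recipe:

-- Relations occupying the most buffer cache (current database only)
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (
    SELECT oid FROM pg_database WHERE datname = current_database()
)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

If an unimportant table dominates this list, that is usually a query problem — something is scanning it — rather than a sizing problem.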
How Gold Lapel relates
Gold Lapel does not tune shared_buffers directly — that is a server-level setting that depends on your hardware and only you can decide what belongs there. What Gold Lapel does is reduce the number of pages your queries need to read in the first place, which relieves pressure on the buffer cache from the other direction.
When Gold Lapel replaces an expensive aggregation with a read from a pre-computed materialized view, the query goes from scanning thousands of pages to reading a handful. When it identifies a missing index, a sequential scan that loaded an entire table into the cache becomes a targeted read of a few pages. In both cases, the buffer cache holds more of the data that matters and less of the data that was loaded once and promptly forgotten.
In my experience, reducing buffer pressure often has a larger impact than increasing shared_buffers, because it improves hit ratios for every query — not just the one that prompted the investigation.