How to Fix Slow PostgreSQL Queries After a Restart
The manor has gone cold. Your guests are shivering. Allow me to attend to the heating.
Why is PostgreSQL slow after a restart?
You restart PostgreSQL — for a version upgrade, a configuration change, a maintenance window — and the queries that ran in under a millisecond now take hundreds of milliseconds. Dashboards that loaded instantly now spin. Users notice. Monitoring alerts fire. I find this deeply embarrassing on behalf of everyone involved. Nothing is broken. The database is functioning correctly. It is simply cold — like a manor house after the heating has been off all night, with guests already arriving at the door.
PostgreSQL relies heavily on its shared buffer cache — a region of shared memory where frequently accessed data pages are kept in RAM. When a query needs a row, PostgreSQL checks the buffer cache first. If the page is there (a "hit"), the access takes nanoseconds. If the page is not there (a "miss"), PostgreSQL must read it from disk — or from the OS page cache, if you are fortunate — at a cost of microseconds to milliseconds per page. The difference is two to three orders of magnitude.
After a restart, the shared buffer cache is empty. Every single page access is a miss. Every query pays the full cost of reading from storage. On a system with 8 GB of shared_buffers and an active working set of 6 GB, that means roughly 786,000 pages (at 8 kB each) must be loaded from disk before the system reaches steady-state performance. On fast NVMe storage, this takes minutes. On network-attached EBS volumes, it can take considerably longer. Every room in the house, cold. Every fireplace, unlit.
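The arithmetic is worth verifying against your own figures. A quick sketch in shell, using the numbers from this example (a 6 GB working set, 8 kB pages):

```shell
# How many 8 kB pages must be read before a 6 GB working set is warm?
working_set_kb=$((6 * 1024 * 1024))   # 6 GB expressed in kB
page_kb=8                             # PostgreSQL's default block size
pages=$((working_set_kb / page_kb))
echo "$pages pages to load before the cache is warm"
# prints "786432 pages to load before the cache is warm"
```

Substitute your own shared_buffers and working-set sizes to estimate the scale of the warm-up your storage must absorb.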
The OS page cache provides partial relief. If the server was not actually rebooted — only the PostgreSQL process was restarted — the OS may still have data cached. But PostgreSQL must still copy those pages into its own shared buffers, incurring memory copies and buffer management overhead. And if the OS was rebooted too, both caches are cold, and every read goes to physical disk.
This is not a bug. It is the normal behavior of any system that relies on in-memory caching. The question is how long you allow your guests to shiver — and whether a well-prepared household can eliminate the chill entirely.
How do you diagnose a cold buffer cache?
Before reaching for solutions, if you'll permit me, let us first confirm that the cold cache is actually the problem. A proper household does not guess at the cause of a draught — it inspects. PostgreSQL provides the evidence you need in pg_stat_database.
-- Check your buffer cache hit ratio after a restart
-- A healthy system runs above 95%. After a restart, expect much lower.
SELECT
datname,
blks_hit,
blks_read,
CASE WHEN blks_hit + blks_read = 0 THEN 0
ELSE round(100.0 * blks_hit / (blks_hit + blks_read), 2)
END AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
Immediately after a restart, you will see something like this:
datname | blks_hit | blks_read | cache_hit_ratio
----------+----------+-----------+-----------------
myapp_db | 14208 | 892471 | 1.57
A cache hit ratio of 1.57%. I shall give that a moment to settle. That means 98.4% of page accesses are going to disk. For comparison, a warm system — one whose household is in proper order — typically looks like this:
datname | blks_hit | blks_read | cache_hit_ratio
----------+------------+-----------+-----------------
myapp_db | 247891042 | 184203 | 99.93
A healthy PostgreSQL system should sustain a buffer cache hit ratio above 95%. Values below 90% indicate either insufficient shared_buffers, a working set that exceeds available memory, or — after a restart — a cache that has not yet warmed up. The ratio will climb naturally as queries load pages into the cache, but the interim period is where your guests notice the cold. And a good household does not make its guests wait.
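When the database-wide ratio looks poor, pg_statio_user_tables will tell you which rooms, specifically, have gone cold. A sketch; the ordering surfaces the tables doing the most disk reads:

```sql
-- Per-table cache hit ratio — which relations are reading from disk?
SELECT
    relname,
    heap_blks_hit,
    heap_blks_read,
    round(100.0 * heap_blks_hit
          / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```

The tables at the top of this list are the first candidates for prewarming.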
You can also see the cold cache at the individual query level with EXPLAIN (ANALYZE, BUFFERS):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
-- Cold cache — pages read from disk:
Index Scan using idx_orders_customer_id on orders
(cost=0.43..12.65 rows=8 width=64)
(actual time=4.218..4.892 rows=7 loops=1)
Index Cond: (customer_id = 42)
Buffers: shared hit=0 read=5
Planning Time: 0.12 ms
Execution Time: 4.941 ms
-- Same query, warm cache — all pages in shared_buffers:
Index Scan using idx_orders_customer_id on orders
(cost=0.43..12.65 rows=8 width=64)
(actual time=0.021..0.035 rows=7 loops=1)
Index Cond: (customer_id = 42)
Buffers: shared hit=5 read=0
Planning Time: 0.08 ms
Execution Time: 0.052 ms
The same query. The same data. The same index. A 95x difference in execution time, explained entirely by whether the pages were in memory. Buffers: shared hit=0 read=5 versus Buffers: shared hit=5 read=0 — that is the difference between a cold house and a warm one. One does not tolerate a 95x decline in service quality.
What does cold vs. warm performance actually look like?
I should like to present the evidence plainly, because the numbers speak with an authority that no amount of metaphor can match. The magnitude of the cold cache penalty depends on the query type, the data volume, and the storage subsystem. These are representative measurements from a 50 GB database with 8 GB of shared_buffers on NVMe storage:
| Query type | Cold cache | Warm cache | Improvement |
|---|---|---|---|
| Point lookup (index scan, 7 rows) | 4.9 ms | 0.05 ms | ~98x |
| Range scan (1K rows, composite index) | 38 ms | 1.2 ms | ~32x |
| Aggregation (100K rows, seq scan) | 842 ms | 47 ms | ~18x |
| Join (orders + customers, hash join) | 124 ms | 4.8 ms | ~26x |
| Dashboard query (3 subqueries) | 2,140 ms | 89 ms | ~24x |
I draw your attention to the point lookup: 4.9 ms cold, 0.05 ms warm. A 98x difference. That is not a performance variation — it is the difference between a response your users never perceive and one that stacks up across every page load until the entire application feels sluggish. The dashboard query is equally sobering: 2,140 ms cold versus 89 ms warm. Your users are staring at a spinner for two full seconds because no one warmed the house before they arrived.
These numbers assume NVMe storage. On spinning disks or high-latency network storage (EBS gp2, for example), the cold-cache penalties are significantly worse — 3-5x the values shown above. I mention this not to alarm, but because a butler who understates the situation is no butler at all.
How does pg_prewarm solve this?
Allow me to introduce the remedy. pg_prewarm is a contrib extension that ships with PostgreSQL — it has been part of the household since 9.4. It provides a function that loads relation data — tables, indexes, materialized views — into the shared buffer cache or OS page cache on demand. Rather than waiting for your guests to complain about the cold room by room, you light every fireplace before they arrive.
-- Install the extension (no restart required for manual prewarming)
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Verify it works
SELECT pg_prewarm('orders');
PostgreSQL 11 added the autoprewarm feature — a background worker that automatically saves and restores buffer cache contents across restarts. Think of it as a permanent member of staff who remembers which rooms were in use and warms them before anyone asks. I approve of this arrangement.
How do you use pg_prewarm manually?
The interface is pleasingly direct. The pg_prewarm() function accepts a relation name and returns the number of 8 kB blocks loaded into the cache.
-- Prewarm a table — returns the number of 8kB blocks loaded
SELECT pg_prewarm('orders');
-- Returns: 48721
-- Prewarm an index
SELECT pg_prewarm('idx_orders_customer_id');
-- Returns: 1842
-- Prewarm a materialized view
SELECT pg_prewarm('mv_daily_revenue');
-- Returns: 312
The three prewarming modes
pg_prewarm supports three modes that control where data is loaded. Each serves a different purpose.
-- 'buffer' mode (default): loads pages into PostgreSQL shared_buffers
-- Data is immediately available — no disk I/O on next query access
SELECT pg_prewarm('orders', 'buffer');
-- 'read' mode: loads pages into the OS page cache only
-- PostgreSQL still copies to shared_buffers on first access,
-- but the slow disk read is already done
SELECT pg_prewarm('orders', 'read');
-- 'prefetch' mode: issues asynchronous prefetch requests to the OS
-- Non-blocking, but less predictable than buffer mode
SELECT pg_prewarm('orders', 'prefetch');
Buffer mode is the right choice for most situations. It loads data directly into PostgreSQL's shared_buffers, which means the very next query that touches those pages will find them in RAM with zero disk I/O. This is the mode you want after a restart — the equivalent of having every room warm and ready the moment the first guest crosses the threshold.
Read mode is useful when shared_buffers is small relative to your working set. It populates the OS page cache, which provides a faster read path than cold disk even though PostgreSQL must still copy pages into its own buffers on first access. The hallway is warm; the guest must still walk to the drawing room. Better than a cold hallway.
Prefetch mode issues asynchronous I/O requests to the OS kernel. It is non-blocking — the function returns before the data is actually loaded — which makes it the least predictable of the three. It works well on Linux with readahead support, but the OS may or may not honor the prefetch requests depending on memory pressure and I/O scheduling. Use it when you want background warming without blocking your connection. I confess I find the lack of guarantees rather unsatisfying, but it has its place.
Partial prewarming and forks
For tables that are larger than shared_buffers, prewarming the entire relation would just evict pages you loaded moments ago. Instead, prewarm a specific range of blocks or a specific fork.
-- Prewarm only the first 2000 blocks of a large table
-- Useful when the table exceeds shared_buffers
SELECT pg_prewarm('orders', 'buffer', 'main', 0, 1999);
-- Prewarm the visibility map (critical for index-only scans)
SELECT pg_prewarm('orders', 'buffer', 'vm');
-- Check how many blocks a relation occupies before deciding
SELECT
relname,
pg_relation_size(oid) / 8192 AS blocks,
pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname IN ('orders', 'idx_orders_customer_id', 'idx_orders_created_at')
ORDER BY pg_relation_size(oid) DESC;
The visibility map fork (vm) deserves particular attention — I would go so far as to say it is the single most underappreciated prewarming target. Index-only scans consult the visibility map to determine whether a heap fetch is necessary. If the visibility map is not in the cache, every index-only scan degrades to a regular index scan with heap fetches — which can double the I/O. Prewarming the visibility map is cheap (it is much smaller than the main fork) and disproportionately valuable. A small investment with an outsized return. The kind of detail a well-run household never overlooks.
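One refinement worth noting: for append-mostly tables, the most recent rows tend to occupy the highest-numbered blocks, so the tail of the relation is often the part worth warming. A sketch under that assumption (the 131,072-block figure is simply 1 GB of 8 kB pages):

```sql
-- Warm roughly the last 1 GB of an append-mostly table
-- (131,072 blocks x 8 kB = 1 GB)
SELECT pg_prewarm(
    'orders', 'buffer', 'main',
    greatest(0, pg_relation_size('orders') / 8192 - 131072),  -- first block
    pg_relation_size('orders') / 8192 - 1                     -- last block
);
```

This only helps if your access pattern genuinely favors recent data; verify with your own query statistics before relying on it.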
What should you prewarm first?
Not every room in the manor requires the same attention. The goal is to warm the relations that your most frequent and most latency-sensitive queries depend on — the rooms your guests actually use. PostgreSQL, helpfully, tracks this information for you.
-- Find the tables and indexes that matter most to your workload
-- using pg_stat_user_tables and pg_stat_user_indexes
SELECT
schemaname || '.' || relname AS relation,
seq_scan + idx_scan AS total_scans,
pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_user_tables
WHERE seq_scan + idx_scan > 0
ORDER BY seq_scan + idx_scan DESC
LIMIT 10;
-- Find the most-used indexes
SELECT
schemaname || '.' || indexrelname AS index,
idx_scan AS scans,
pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan > 0
ORDER BY idx_scan DESC
LIMIT 10;
The priority order:
- Indexes first. A cold B-tree index means every query that depends on it pays the full penalty of walking the tree from disk. Indexes are typically much smaller than the tables they serve — a 50 MB index warms in under a second — and every query that uses it benefits. The return on prewarmed megabytes is highest for indexes. If I may be direct: this is where you begin. Always.
- Small, hot tables second. Lookup tables, configuration tables, and tables with fewer than a million rows that appear in many queries. These fit easily in shared_buffers and warm quickly.
- Critical large tables third. The main transactional tables — orders, events, users. If these are larger than shared_buffers, prewarm the most-queried partitions or the first N blocks that contain the most recently written data.
- Materialized views fourth. If your application queries materialized views at startup (dashboards, reports), prewarm them. They are read-heavy and often queried immediately.
- Visibility maps throughout. Prewarm the visibility map fork for any table that serves index-only scans.
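The "indexes first" advice lends itself to automation. A sketch that warms every index on a given table by consulting pg_index (substitute your own table name):

```sql
-- Warm every index on a table in one pass
SELECT
    indexrelid::regclass AS index_name,
    pg_prewarm(indexrelid::regclass) AS blocks_loaded
FROM pg_index
WHERE indrelid = 'orders'::regclass;
```

One row per index, with the number of blocks loaded for each — a convenient receipt for your warming efforts.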
A practical startup script
A well-prepared household does not rely on memory alone. Codify your prewarming targets in a script that runs after every restart.
-- Run after every restart to warm critical relations
-- Returns a summary of what was loaded
SELECT
relation,
pg_prewarm(relation::regclass) AS blocks_loaded,
pg_size_pretty(pg_relation_size(relation::regclass)) AS size
FROM (VALUES
('orders'),
('customers'),
('idx_orders_customer_id'),
('idx_orders_created_at'),
('idx_orders_status'),
('mv_daily_revenue')
) AS t(relation);
The output tells you exactly how much data was loaded and how large each relation is on disk. If the total exceeds your shared_buffers, reduce the list — you are evicting earlier entries as you load later ones.
How does autoprewarm work?
Manual prewarming requires you to know which relations to warm — you must curate the list yourself. Autoprewarm removes that burden entirely. It is, if you will forgive the comparison, the permanent staff member who observes which rooms are in use, notes them down at regular intervals, and has them all warmed before the household stirs the next morning. I am quite fond of autoprewarm.
-- Add to postgresql.conf (requires a restart to take effect)
shared_preload_libraries = 'pg_prewarm'
-- Configuration (these are the defaults)
pg_prewarm.autoprewarm = true
pg_prewarm.autoprewarm_interval = 300 -- seconds between cache snapshots
The mechanics:
- When autoprewarm is enabled, a background worker starts and periodically dumps the list of cached blocks to $PGDATA/autoprewarm.blocks. The default interval is 300 seconds (5 minutes).
- The file is also written at clean shutdown.
- After a restart, the autoprewarm worker reads autoprewarm.blocks and loads those same blocks back into shared_buffers using two background workers. Loading proceeds database by database, relation by relation, sorted by block number for sequential I/O.
- The server is fully operational during restoration — queries can run while the cache fills in. Relations that have already been restored perform at full speed; relations still in the queue perform at cold-cache speed until their turn arrives.
-- After a restart with autoprewarm enabled, check the server log:
-- LOG: autoprewarm successfully loaded 42891 of 42891 previously cached blocks
The evidence is satisfyingly clear. In benchmarks by EDB, systems with autoprewarm reached peak TPS immediately after restart, while systems without autoprewarm required approximately 300 seconds — five full minutes of degraded service — to reach the same throughput level. Five minutes during which your guests are noticing the cold. The improvement is most dramatic for workloads where the hot data set fits within shared_buffers.
Managing autoprewarm manually
Even the most capable staff member benefits from the occasional direct instruction. You can interact with the autoprewarm worker directly when needed.
-- Manually dump current buffer contents to autoprewarm.blocks
SELECT autoprewarm_dump_now();
-- Manually start the autoprewarm worker
-- (useful if it was not started at boot)
SELECT autoprewarm_start_worker();
autoprewarm_dump_now() is useful before a planned restart — it ensures the snapshot is current rather than relying on the periodic dump that may be up to 5 minutes old. I would consider it a matter of basic courtesy to run this immediately before shutting down the server. The future instance will thank you.
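Before a planned restart, then, the courteous sequence is a fresh snapshot followed by a checkpoint, so that both the cache inventory and the data files are current when the server goes down:

```sql
-- Immediately before a planned restart
SELECT autoprewarm_dump_now();  -- snapshot the current buffer contents
CHECKPOINT;                     -- flush dirty pages, shortening shutdown and recovery
```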
How do you verify the cache is warm?
Trust, but verify. After prewarming, you want confirmation that the data is actually in the cache — that the fires are lit and the rooms are genuinely warm, not merely scheduled to be. The pg_buffercache extension provides a direct view into shared_buffers.
-- Install pg_buffercache to inspect what is in the cache
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- See which relations occupy the most buffer cache space
SELECT
c.relname,
count(*) AS buffers,
pg_size_pretty(count(*) * 8192) AS cached_size,
round(100.0 * count(*) / (
SELECT setting::int FROM pg_settings WHERE name = 'shared_buffers'
), 2) AS pct_of_cache
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
AND c.relname NOT LIKE 'pg_%'
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 15; This shows you exactly which tables and indexes occupy cache space and what percentage of the total cache each one consumes. After prewarming, the relations you targeted should appear near the top of this list.
For a quick before-and-after check on a specific table:
-- Before prewarming: check a specific table's presence in cache
SELECT
count(*) AS cached_blocks,
pg_size_pretty(count(*) * 8192) AS cached_size
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE c.relname = 'orders';
-- cached_blocks | cached_size
-- ---------------+-------------
-- 0 | 0 bytes
-- After pg_prewarm('orders'):
-- cached_blocks | cached_size
-- ---------------+-------------
-- 48721 | 381 MB
Using pg_buffercache_summary for a quick overview
On PostgreSQL 16 and later, pg_buffercache_summary() provides a fast overview without the per-buffer overhead.
-- PostgreSQL 16+: a faster summary without per-buffer detail
SELECT * FROM pg_buffercache_summary();
-- buffers_used | buffers_unused | buffers_dirty | buffers_pinned | usagecount_avg
-- --------------+----------------+---------------+----------------+----------------
-- 42891 | 22645 | 1204 | 0 | 3.42
The usagecount_avg column is informative: higher values mean the cached pages are being actively accessed. After a fresh prewarm, usage counts start low and climb as queries touch the data. A low average immediately after prewarming is expected — it means the data is cached but has not yet been accessed by production queries.
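Should you wish to watch the counts climb, pg_buffercache exposes the per-buffer counter directly:

```sql
-- Distribution of buffer usage counts (0–5 under clock-sweep eviction)
SELECT usagecount, count(*) AS buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```

Rows with NULL usagecount are unused buffers; a population shifting toward higher counts means the warmed pages are earning their keep.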
What about failover and replica promotion?
Planned restarts are not the only occasion on which the manor goes cold. Failovers, replica promotions, and new replicas joining a cluster all present the same problem — the newly active server starts with whatever happened to be in its cache, which may not match the production workload. An unplanned transfer of duties, and the new primary is unprepared for the guests it has suddenly inherited.
After a failover, the promoted replica may have a warm cache for replication traffic (which tends to be sequential) but a cold cache for the random-access patterns of direct client queries. The working set shifts, and the cache needs to catch up.
-- After promoting a replica or failing over to a standby,
-- the new primary starts with whatever was in its cache.
-- A targeted prewarm script fills the gaps:
-- 1. Connect to the new primary
-- 2. Run the same startup script used for planned restarts
\i /var/lib/postgresql/scripts/prewarm_critical.sql
-- 3. Verify the cache hit ratio is climbing
SELECT
round(100.0 * blks_hit / (blks_hit + blks_read), 2) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database(); For automated failover setups (Patroni, pg_auto_failover, cloud provider failover), consider adding the prewarm script to the promotion callback. The sooner the cache is warm, the sooner the new primary performs at full speed. No guest should ever notice that the staff has changed.
How does this work on managed PostgreSQL services?
Managed services add a layer of abstraction — someone else runs the household, as it were — but pg_prewarm remains broadly available.
Amazon RDS and Aurora PostgreSQL
pg_prewarm is supported as a standard extension. For autoprewarm, add pg_prewarm to shared_preload_libraries in a custom parameter group and reboot the instance. Aurora PostgreSQL also supports the extension and benefits particularly from prewarming after auto-scaling events when new reader instances start cold.
-- On RDS/Aurora: add pg_prewarm to shared_preload_libraries
-- via a custom parameter group, then reboot the instance.
-- After reboot, autoprewarm restores the cache automatically.
-- Verify autoprewarm is active:
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pg_prewarm%';
Google Cloud SQL
pg_prewarm is available as a supported extension. Enable it with CREATE EXTENSION for manual prewarming. For autoprewarm, set the shared_preload_libraries database flag.
-- On Cloud SQL: pg_prewarm is available as a supported extension.
-- Manual prewarming works out of the box:
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');
-- For autoprewarm, set the shared_preload_libraries flag
-- via the database flags in the Cloud Console or gcloud CLI.
Azure Database for PostgreSQL
Manual prewarming is supported. Autoprewarm availability depends on the service tier and configuration — check the server parameters for shared_preload_libraries editability.
Serverless and ephemeral environments
On serverless PostgreSQL services (Neon, Supabase with connection pooling), the cache behavior differs fundamentally. Compute instances may be suspended and resumed, potentially invalidating cached state. pg_prewarm is still useful for manual warming after a cold start, but autoprewarm may not function as expected when the file system is ephemeral. I should be forthright: in these environments, the connection pooler's warm-up behavior matters more than PostgreSQL's buffer cache. The heating system is not yours to manage.
Putting it together: a complete prewarming setup
A properly managed household does not rely on a single arrangement. For production deployments, combine autoprewarm with a manual prewarm script as a safety net. Autoprewarm — your permanent staff member — handles the common case of a clean restart. The manual script handles the situations that require a more personal touch: crash recovery, failover, new replicas.
Step 1: Enable autoprewarm
Add pg_prewarm to shared_preload_libraries in postgresql.conf and restart. This handles automatic cache restoration for all future clean shutdowns and restarts.
Step 2: Create a manual prewarm script
Identify your critical tables and indexes, and write a script that warms them.
#!/bin/bash
# prewarm.sh — run after every PostgreSQL restart
# Warms critical tables and indexes, logs the results
PGDATABASE="myapp_db"
psql -d "$PGDATABASE" -Aqt <<'SQL'
SELECT 'Prewarming started at ' || now();
SELECT
relation,
pg_prewarm(relation::regclass) AS blocks,
pg_size_pretty(pg_relation_size(relation::regclass)) AS size
FROM (VALUES
('orders'),
('customers'),
('products'),
('idx_orders_customer_id'),
('idx_orders_created_at'),
('idx_orders_status'),
('idx_products_category'),
('mv_daily_revenue')
) AS t(relation);
SELECT 'Cache hit ratio: ' || round(
100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2
) || '%'
FROM pg_stat_database
WHERE datname = current_database();
SELECT 'Prewarming complete at ' || now();
SQL
Step 3: Automate the script
Hook the script into your restart process. On a Linux system with systemd, a oneshot service that runs after PostgreSQL starts is clean and reliable.
# /etc/systemd/system/pg-prewarm.service
# Runs the prewarm script after PostgreSQL starts
[Unit]
Description=PostgreSQL cache prewarming
After=postgresql.service
Requires=postgresql.service
[Service]
Type=oneshot
User=postgres
ExecStart=/var/lib/postgresql/scripts/prewarm.sh
StandardOutput=journal
[Install]
WantedBy=multi-user.target
Step 4: Monitor the warm-up
After each restart, check the buffer cache hit ratio in your monitoring system. It should climb from near-zero to above 95% within the first few minutes if autoprewarm and your manual script are both performing their duties. If it takes longer, your prewarm targets may not cover the actual working set — revisit the pg_stat_user_tables and pg_stat_user_indexes queries to update them. The goal, always, is that no guest ever notices a restart occurred.
Limitations worth knowing
I should be candid about the boundaries, because pretending a tool has none would be a disservice to you. pg_prewarm is not a universal solution to all cache-related performance problems.
- Prewarmed pages have no special protection. PostgreSQL's clock-sweep eviction algorithm treats prewarmed pages the same as any other cached page. If the buffer cache is under pressure, prewarmed pages will be evicted to make room for new data. Prewarming is most valuable immediately after a restart, before the natural workload begins competing for cache space.
- Prewarming causes I/O load. Loading 8 GB of data into shared_buffers means reading 8 GB from storage. On a system that is already under I/O pressure, prewarming can make things temporarily worse — warming the house by opening every gas valve at once, as it were. Schedule it during low-traffic periods or throttle it by warming one relation at a time with a brief pause between each.
- It does not help with the OS page cache. Buffer mode fills shared_buffers. Read mode fills the OS page cache. Neither helps the other. If your working set exceeds shared_buffers and relies on the OS cache for overflow, you may need both modes — buffer for the most critical data, read for the next tier.
- The autoprewarm file can become stale. If the workload shifts significantly between the last dump and the restart, autoprewarm will restore the old working set rather than the current one. This is rarely a problem in practice — workload shifts are usually gradual — but it is worth knowing.
- Partitioned tables require per-partition prewarming. The parent of a partitioned table is not a physical relation. You must prewarm each partition individually.
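The last limitation is straightforward to work around. A sketch that enumerates the partitions of a partitioned table (a hypothetical events table here) via pg_inherits and warms each one:

```sql
-- Prewarm every partition of a partitioned table
SELECT
    inhrelid::regclass AS partition_name,
    pg_prewarm(inhrelid::regclass) AS blocks_loaded
FROM pg_inherits
WHERE inhparent = 'events'::regclass;
```

Note that pg_inherits lists only direct children; for multi-level partitioning, pg_partition_tree() on PostgreSQL 12 and later enumerates the full tree.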
How Gold Lapel complements pg_prewarm
pg_prewarm operates at the physical layer — loading raw data pages into the buffer cache so PostgreSQL's executor finds them in RAM instead of on disk. It ensures the manor has heat. Gold Lapel operates at the logical layer: maintaining pre-computed result sets and optimized query paths so that the expensive queries need not fully execute in the first place. The rooms are warm, and the answers to your most common questions are already prepared on a silver tray.
After a restart, pg_prewarm repopulates the buffer cache with the pages your queries need. Gold Lapel ensures its materialized views and cached results are fresh, so repeated query patterns are served from prepared answers rather than re-executing the full work from the newly warmed pages. One warms the raw materials. The other reduces the number of times you need to use them.
Both address the same concern — queries are slow when the data they need is not ready — but from complementary angles. pg_prewarm is valuable for any PostgreSQL deployment, regardless of what sits in front of it. Gold Lapel adds a layer that reduces total work, warmed cache or not. I would be a poor butler indeed if I suggested you needed both. But I would be equally poor if I did not mention that they work rather well together.