
Best PostgreSQL Extensions for Performance in 2026

A curated inventory of what belongs in the household — and what can be politely declined at the door.

The Butler of Gold Lapel · March 26, 2026 · 20 min read
An elegant diagram of eighteen extensions was requested, each in its proper category. The artist returned a single unlabelled circle and called it "the ecosystem." I admire the minimalism. I cannot use it.

What makes an extension worth installing

Extensions are PostgreSQL's defining architectural advantage. They add capabilities — monitoring, indexing, search, replication, vector similarity — without forking the database, replacing the query engine, or migrating to a different system. Unlike plugins in many other databases, PostgreSQL extensions run inside the database process with full access to the planner, executor, and storage layer. They are not bolted on. They belong.

But each extension is a dependency. It must be maintained, tested during major version upgrades, and accounted for in shared_preload_libraries memory budgets. The question is not "how many extensions can I install?" but "which extensions solve a real, measured performance problem?"

This guide reviews the PostgreSQL extensions that have earned their place in production in 2026. The criteria: production-proven, actively maintained, solves a specific performance problem, and available on at least some major managed providers. Each entry includes a direct verdict. For broader context on PostgreSQL performance trends, see The State of Postgres Performance.

Monitoring and observability

If you will permit me a strong opinion this early: monitoring extensions are where performance work begins. You cannot optimize what you cannot measure.

pg_stat_statements — The one extension every database should have

pg_stat_statements tracks execution statistics for every normalized query: call count, total and mean execution time, rows returned, and I/O statistics. The single most important metric is total_exec_time — the cumulative execution time across all invocations. A 5ms query executing 100,000 times per hour dominates a 500ms query that runs 10 times.
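A minimal triage query, assuming PostgreSQL 13+ column names (total_exec_time and mean_exec_time; versions before 13 call these total_time and mean_time):

```sql
-- Top consumers by cumulative execution time
SELECT
    left(query, 60) AS query,
    calls,
    round(total_exec_time::numeric, 1) AS total_ms,
    round(mean_exec_time::numeric, 2)  AS mean_ms,
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by total_exec_time rather than mean_exec_time is the point: it surfaces the cheap-but-frequent queries that dominate overall load.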

Available on every major managed provider. Overhead is minimal — typically less than 2%.

Verdict: Install it on every PostgreSQL database, no exceptions. I am not given to absolutes, but this is one.

pg_qualstats — Find your missing indexes

pg_qualstats tracks which columns your queries filter on and how often. Its bundled index advisor (exposed as the pg_qualstats_index_advisor() function in the 2.x series) outputs suggested CREATE INDEX statements. Pairing with HypoPG lets you test those suggestions without building the real index.

Depends on pg_stat_statements. Availability on managed providers is more limited.
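A sketch of asking for suggestions, assuming pg_qualstats 2.x, where the advisor returns a JSON document with an 'indexes' array:

```sql
-- Suggested CREATE INDEX statements from collected predicate statistics
SELECT v AS suggested_index
FROM json_array_elements_text(
    pg_qualstats_index_advisor()->'indexes'
) AS v;
```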

Verdict: Essential for any database with more than a handful of tables.

auto_explain — Execution plans without manual EXPLAIN

auto_explain automatically logs EXPLAIN output for queries exceeding a configurable threshold. No code changes, no manual EXPLAIN, no need to reproduce the query. Ships with PostgreSQL as a contrib module.
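A minimal configuration sketch; the threshold is illustrative, and auto_explain.log_analyze adds per-query instrumentation overhead, so enable it deliberately:

```ini
# postgresql.conf
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 1000   # log plans for statements over 1s (ms)
auto_explain.log_analyze = on          # include actual row counts and timing
auto_explain.log_buffers = on          # include buffer usage (needs log_analyze)
```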

Verdict: Enable on every production database with auto_explain.log_min_duration = 1000 (one second). It is the only way to capture execution plans for queries you cannot reproduce in development. For a setup guide covering pg_stat_statements, pg_qualstats, and auto_explain together, see the PostgreSQL monitoring stack guide.

pg_stat_monitor — The enhanced alternative

pg_stat_monitor is Percona's alternative with time-based aggregation buckets, histogram distributions, and query plan text storage. The tradeoff is availability — not universally supported on managed providers.
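As a sketch of what the time-based aggregation looks like, assuming the column names from the pg_stat_monitor 2.x view (bucket, bucket_start_time, calls, total_exec_time):

```sql
-- How one normalized query behaved bucket by bucket
SELECT bucket, bucket_start_time, calls,
       round(total_exec_time::numeric, 1) AS total_ms
FROM pg_stat_monitor
WHERE query LIKE '%FROM orders%'
ORDER BY bucket_start_time DESC
LIMIT 12;
```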

Verdict: Excellent if self-managed and you want richer aggregation. On managed providers, pg_stat_statements remains the safer choice. See the pg_stat_monitor optimization guide.

pg_wait_sampling — Know what your queries are waiting on

pg_wait_sampling profiles wait events — the reasons queries pause during execution. Particularly valuable when execution times increase without any visible change in the query plan.
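The cumulative profile view makes the dominant wait visible at a glance; a sketch against the pg_wait_sampling_profile view the extension provides:

```sql
-- Which wait events dominate across all backends
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
WHERE event IS NOT NULL
GROUP BY event_type, event
ORDER BY samples DESC
LIMIT 10;
```

A profile dominated by Lock or LWLock events points to contention; IO events point to storage or an undersized buffer cache.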

Verdict: Not needed on every database, but essential when you hit a performance wall that query-level analysis does not resolve.

Indexing and query optimization

If monitoring tells you what is slow, indexing is how you make it fast. These extensions expand PostgreSQL's indexing capabilities beyond the built-in B-tree, hash, GIN, GiST, SP-GiST, and BRIN types. For a comprehensive guide covering indexing strategies alongside configuration, materialized views, and monitoring, see the PostgreSQL performance tuning guide.

pg_trgm — Fast LIKE queries and fuzzy search

pg_trgm breaks text into three-character sequences (trigrams) and lets GIN and GiST indexes answer LIKE and ILIKE queries. Without it, a query like WHERE name ILIKE '%search_term%' requires a sequential scan.

Trigram index
CREATE INDEX idx_users_name_trgm ON users USING gin (name gin_trgm_ops);

-- This now uses an index scan instead of sequential scan
SELECT * FROM users WHERE name ILIKE '%smith%';

Available on every managed provider.

Verdict: If you have any text search that uses LIKE or ILIKE with wildcard patterns, install pg_trgm. The performance difference is typically 100-1,000x.

btree_gin and btree_gist — Multi-type composite indexes

btree_gin and btree_gist allow B-tree-compatible data types to be included in GIN and GiST indexes. This enables composite indexes mixing trigram text search with date range filters:

Multi-type GIN index
-- Without btree_gin, you cannot include created_at in a GIN index
CREATE INDEX idx_products_search
    ON products USING gin (name gin_trgm_ops, created_at);

btree_gist also enables exclusion constraints for enforcing non-overlapping ranges:

Exclusion constraint
ALTER TABLE reservations ADD CONSTRAINT no_overlap
    EXCLUDE USING gist (room_id WITH =, tsrange(start_time, end_time) WITH &&);

Verdict: Niche but essential when you need them.

HypoPG — Test indexes before creating them

HypoPG creates hypothetical indexes that exist only in the planner's metadata. The planner evaluates them in EXPLAIN without building anything:

Hypothetical index testing
-- Create a hypothetical index
SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id, status)');

-- Check if the planner would use it
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';

-- Clean up
SELECT hypopg_reset();

Verdict: Invaluable for index tuning on databases where index creation has significant cost. See the HypoPG guide.

bloom — Probabilistic multi-column indexing

bloom creates a single compact index covering equality filters across many columns simultaneously:

Bloom index
CREATE INDEX idx_events_bloom ON events USING bloom (
    event_type, source, region, severity, user_id
) WITH (length=80, col1=2, col2=2, col3=2, col4=2, col5=2);

One index. Five columns. Smaller than five separate B-tree indexes. Supports only equality operators — no range queries.

Verdict: Specialized — excellent for ad-hoc filter combinations on wide tables (analytics dashboards, log search, admin panels).

Caching and materialized views

pg_prewarm — Warm your buffer cache on startup

pg_prewarm loads tables and indexes into shared_buffers. After a restart, the buffer cache is empty — the first queries must read all data from disk:

Buffer cache prewarming
-- Load an entire table into shared_buffers
SELECT pg_prewarm('orders');

-- Load only the index
SELECT pg_prewarm('idx_orders_customer_id');

-- Load a specific range of blocks
SELECT pg_prewarm('orders', 'buffer', 'main', 0, 1000);

When added to shared_preload_libraries, pg_prewarm can automatically save cache contents at shutdown and restore them at startup.

Verdict: Simple, low-risk, measurable improvement after restarts.

pg_ivm — Incremental materialized view maintenance

pg_ivm addresses one of PostgreSQL's persistent frustrations: materialized views that must be entirely recalculated on every refresh. pg_ivm processes only the changed rows:

Incremental materialized view
-- Create an incrementally maintainable materialized view
SELECT create_immv(
    'order_summary',
    'SELECT customer_id, count(*), sum(total)
     FROM orders
     GROUP BY customer_id'
);

The caveat is maturity — pg_ivm has limitations on supported query types and is not available on most managed providers.

Verdict: The concept is exactly right. The implementation is maturing. Evaluate for your specific view patterns. See TimescaleDB vs Materialized Views.

pg_buffercache — See what is in shared buffers

pg_buffercache shows which relations occupy which buffer pages:

Buffer cache inspection
SELECT
    c.relname,
    pg_size_pretty(count(*) * 8192) AS cached_size,
    round(100.0 * count(*) /
        (SELECT setting::integer FROM pg_settings WHERE name = 'shared_buffers'),
    2) AS pct_of_cache
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 20;

Verdict: Diagnostic tool — install it but do not query it on a schedule. See the pg_buffercache analysis guide.

Full-text search and text processing

Allow me to address a pattern I observe frequently: teams deploying a dedicated search engine before discovering what PostgreSQL already provides.

Built-in tsvector/tsquery

Not an extension, but the foundation. PostgreSQL's native full-text search provides fast, ranked search with phrase matching, prefix matching, and relevance scoring:

Full-text search
-- Add a tsvector column
ALTER TABLE articles ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED;

-- Index it
CREATE INDEX idx_articles_search ON articles USING gin (search_vector);

-- Search with ranking
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'postgresql & performance') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;

Verdict: Start here. It handles 80% of search use cases natively. Add a dedicated search engine only when you need faceted search, complex aggregations, or sub-millisecond search across billions of documents.

pg_trgm (revisited for search)

Beyond LIKE optimization, pg_trgm provides fuzzy matching and typo tolerance:

Fuzzy search with pg_trgm
-- Find similar product names (typo tolerance)
SELECT name, similarity(name, 'postgresql') AS sim
FROM products
WHERE similarity(name, 'postgresql') > 0.3
ORDER BY sim DESC;

-- Word similarity for autocomplete
SELECT name
FROM products
WHERE name % 'postg'
ORDER BY word_similarity('postg', name) DESC
LIMIT 10;

Verdict: Together with native full-text search, pg_trgm covers what most applications need from a search engine — without being a separate search engine.

unaccent — Accent-insensitive search

unaccent strips accent marks from text, enabling accent-insensitive search:

Accent-insensitive search
CREATE EXTENSION unaccent;

CREATE TEXT SEARCH CONFIGURATION custom_english (COPY = english);
ALTER TEXT SEARCH CONFIGURATION custom_english
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart, word, hword, hword_part
    WITH unaccent, english_stem;

-- Now searches are accent-insensitive
SELECT * FROM articles
WHERE to_tsvector('custom_english', title) @@ to_tsquery('custom_english', 'cafe');
-- Matches "café", "Café", "CAFÉ", and "cafe"

Verdict: Tiny extension, large impact. Essential for multilingual content or user-facing search.

Scaling and partitioning

pg_partman — Automated table partitioning

pg_partman automates partition creation and management:

Automated partitioning
-- Create a partitioned table
CREATE TABLE events (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    event_time  timestamptz NOT NULL,
    event_type  text,
    payload     jsonb
) PARTITION BY RANGE (event_time);

-- Let pg_partman manage partitions (pg_partman 5.x signature;
-- v4 additionally required p_type := 'native')
SELECT partman.create_parent(
    p_parent_table := 'public.events',
    p_control := 'event_time',
    p_interval := '1 day',
    p_premake := 7
);

Available on most managed providers. See pg_partman Alternatives.

Verdict: Essential for any table that will grow past 100 million rows with time-based or sequential access patterns.

Citus — Distributed PostgreSQL

Citus distributes tables across multiple nodes for horizontal scaling:

Distributed table
-- Distribute a table across the cluster
SELECT create_distributed_table('events', 'tenant_id');

-- Queries filtered by tenant_id run on a single shard
SELECT * FROM events WHERE tenant_id = 42 AND event_time > now() - interval '7 days';

Available as a managed service on Azure. Not on RDS, Cloud SQL, or Supabase.

Verdict: The right answer when you have genuinely outgrown a single node. But exhaust vertical scaling, partitioning, read replicas, and connection pooling first. Most databases that feel "too big for one server" are carrying missing indexes or untuned queries.

postgres_fdw — Federated queries across servers

postgres_fdw enables querying remote PostgreSQL servers as if their tables were local:

Foreign data wrapper
CREATE EXTENSION postgres_fdw;

CREATE SERVER analytics_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'analytics-db.internal', port '5432', dbname 'analytics');

CREATE USER MAPPING FOR current_user
    SERVER analytics_server
    OPTIONS (user 'readonly', password 'secret');

IMPORT FOREIGN SCHEMA public
    FROM SERVER analytics_server
    INTO foreign_analytics;

-- Query the remote table as if it were local
SELECT * FROM foreign_analytics.events
WHERE event_time > now() - interval '1 hour';

Verdict: Useful for federation, migration staging, and data integration. Not a scaling solution.

Vector search and AI workloads

pgvector — Vector similarity search

pgvector adds a vector data type and similarity search operators:

Vector similarity search
CREATE EXTENSION vector;

CREATE TABLE documents (
    id      bigint PRIMARY KEY,
    content text,
    embedding vector(1536)  -- OpenAI text-embedding-3-small dimension
);

-- Create an HNSW index for fast similarity search
CREATE INDEX idx_documents_embedding
    ON documents USING hnsw (embedding vector_cosine_ops);

-- Find the 10 most similar documents
SELECT id, content, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY embedding <=> '[0.1, 0.2, ...]'::vector
LIMIT 10;

pgvector supports IVFFlat (faster to build, lower recall) and HNSW (slower to build, higher recall, better for dynamic datasets). Performance is competitive with dedicated vector databases for datasets under approximately 10 million vectors. For a detailed comparison with Pinecone, see pgvector vs Pinecone. For a broader decision framework covering five vector search tools, see Do You Need a Vector Database?
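For comparison with the HNSW example above, an IVFFlat sketch; the lists and probes values are illustrative starting points that trade recall against speed:

```sql
-- IVFFlat: recall is controlled by lists (at build) and probes (at query time)
CREATE INDEX idx_documents_embedding_ivf
    ON documents USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- More probes means higher recall but slower queries (session-level setting)
SET ivfflat.probes = 10;
```

Unlike HNSW, IVFFlat should be built after the table contains representative data, since the index clusters around the vectors present at build time.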

Available on most managed providers. See the pgvector performance tuning guide.

Verdict: If you already have PostgreSQL and need vector search, start here. Add a dedicated vector database only at the limits.

Data integrity and auditing

pgaudit — Detailed audit logging

pgaudit provides detailed audit logging at the statement and object level. Object-level auditing has minimal overhead; statement-level with all classes enabled can add measurable overhead on high-throughput systems.
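A configuration sketch that keeps overhead down by auditing only writes and DDL; the class list is illustrative and should match your compliance scope:

```ini
# postgresql.conf
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'     # skip the high-volume READ class
pgaudit.log_relation = on      # log each relation touched by a statement
```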

Available on most managed providers. See pgaudit Performance Benchmarks.

Verdict: Required for compliance environments (SOC 2, HIPAA, PCI DSS). If you do not have compliance requirements, PostgreSQL's built-in log_statement is sufficient.

pgcrypto — Server-side encryption

pgcrypto provides cryptographic functions: symmetric encryption, hashing, and random data generation:

Cryptographic functions
CREATE EXTENSION pgcrypto;

-- Generate a UUID (PostgreSQL 14+ has gen_random_uuid() built-in)
SELECT gen_random_uuid();

-- Hash a password
SELECT crypt('user_password', gen_salt('bf', 8));

-- Encrypt sensitive data
UPDATE users SET ssn_encrypted = pgp_sym_encrypt(ssn, 'encryption_key');

Verdict: Use for specific encryption needs. For row-level encryption at rest, prefer application-level encryption.

The extensions to skip

A proper inventory notes not only what belongs but what can be set aside. None of these are poorly made — they have simply been superseded.

dblink — The predecessor to postgres_fdw. postgres_fdw supersedes it with a cleaner SQL interface and better pushdown optimization.

tablefunc — Crosstab queries. Modern PostgreSQL provides FILTER clauses, GROUPING SETS, CUBE, and ROLLUP that handle most cases directly.

file_fdw — Foreign tables backed by server-side files. Useful for one-time imports, but not a performance extension.
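As an illustration of why tablefunc is rarely needed now, here is a pivot written with the built-in FILTER clause, against a hypothetical sales table:

```sql
-- Quarterly pivot without crosstab()
SELECT region,
       sum(amount) FILTER (WHERE quarter = 'Q1') AS q1,
       sum(amount) FILTER (WHERE quarter = 'Q2') AS q2,
       sum(amount) FILTER (WHERE quarter = 'Q3') AS q3,
       sum(amount) FILTER (WHERE quarter = 'Q4') AS q4
FROM sales
GROUP BY region;
```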

Installation and compatibility matrix

Extension            RDS   Cloud SQL   Supabase   Neon   Self-Managed   Restart required
pg_stat_statements   Yes   Yes         Yes        Yes    Yes            Yes
pg_qualstats         No    No          No         No     Yes            Yes
auto_explain         Yes   Yes         Yes        Yes    Yes            Yes*
pg_trgm              Yes   Yes         Yes        Yes    Yes            No
HypoPG               No    No          Yes        No     Yes            No
bloom                Yes   Yes         Yes        Yes    Yes            No
pg_prewarm           Yes   Yes         Yes        Yes    Yes            Yes**
pg_partman           Yes   Yes         Yes        No     Yes            No
pgvector             Yes   Yes         Yes        Yes    Yes            No
pgaudit              Yes   Yes         Yes        No     Yes            Yes
pgcrypto             Yes   Yes         Yes        Yes    Yes            No

*auto_explain can be loaded per-session with LOAD 'auto_explain'; without a restart. **pg_prewarm requires shared_preload_libraries only for automatic buffer save/restore.

A typical production configuration:

shared_preload_libraries
shared_preload_libraries = 'pg_stat_statements, pg_qualstats, auto_explain, pg_prewarm'

Adding or removing entries requires a full PostgreSQL restart — not just a reload. Plan these changes during maintenance windows.

Frequently asked questions