PostgreSQL on Supabase: What the Defaults Leave on the Table
Full PostgreSQL access is a genuine advantage. Allow me to show you around.
Good evening. I understand you have built something on Supabase. Excellent taste.
Supabase is one of the few managed platforms that provides direct access to a PostgreSQL database — not an abstraction layer on top of one, not a proprietary query engine, but an actual PostgreSQL instance with full SQL access, extensions, functions, and configuration options. This is a genuine advantage, and one worth appreciating.
You can write raw SQL in the Supabase SQL editor, install extensions with CREATE EXTENSION, create custom functions and triggers, manage indexes, and configure per-session parameters. The dashboard and client libraries are convenient, but the real power of Supabase is the database underneath.
Most Supabase users underutilize this access — and that is not a criticism, merely an observation. The default Supabase configuration is optimized for getting started quickly, not for production performance at scale. The gap between "deployed on Supabase" and "optimized on Supabase" is significant, and closing it requires understanding both PostgreSQL fundamentals and the Supabase-specific layers that sit on top.
Those layers include:
- Auth — Supabase's authentication system, integrating with PostgreSQL via the auth.uid() and auth.jwt() functions
- Realtime — change streaming via PostgreSQL logical replication
- Storage — file storage backed by PostgreSQL metadata tables
- Edge Functions — Deno-based serverless functions that connect to the database
- PostgREST — the REST API layer that translates HTTP requests into SQL queries
Each of these additions interacts with database performance. Understanding how they work — and where they introduce overhead — is what we are here to attend to.
Connection pooling with Supavisor
What Supavisor is
Supavisor is Supabase's built-in connection pooler. It replaced PgBouncer in 2024 and is written in Elixir, designed for multi-tenant connection pooling at scale. Every Supabase project gets a Supavisor endpoint automatically.
For background on why connection pooling matters and how different pooling modes work, see the connection pooling guide and the connection pooling glossary entry.
Supavisor supports two pooling modes:
- Transaction mode (port 6543): Connections are released back to the pool after each transaction completes. This is the default and suitable for most application workloads.
- Session mode (port 5432 via pooler): The connection persists for the entire client session. Necessary when your application relies on session-level features: prepared statements, LISTEN/NOTIFY, session variables, or advisory locks.
The direct connection string (port 5432, non-pooler) bypasses Supavisor entirely. Use this for migrations, psql sessions, and administrative tasks — not for application traffic.
Connection limits by plan
| Plan | Direct Connections | Notes |
|---|---|---|
| Free | 60 | Shared compute |
| Pro | 60 | Same connection limit as Free; upgrade provides more compute |
| Team | 60–200 | Depends on compute add-on |
| Enterprise | Custom | Negotiated |
An important detail — and one I wish were more widely known: the Pro plan has the same 60-connection limit as the Free plan. The upgrade buys more CPU and memory, not more connections. This catches developers off guard when they upgrade specifically to solve connection issues.
If you are hitting connection limits:
- Use the Supavisor pooler endpoint for all application traffic. This is the single most effective change.
- Reduce connection idle time in your application. Close connections promptly, set reasonable idle timeouts.
- Consider a compute add-on before raising the connection limit. More compute headroom per connection usually matters more than more connections.
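To make the budgeting concrete, here is a minimal back-of-the-envelope sketch for dividing a fixed connection limit across application instances. The numbers (60-connection plan, 10 connections held back) are illustrative assumptions, not Supabase recommendations:

```python
# Back-of-the-envelope client-side pool sizing against a fixed connection limit.
# All numbers are illustrative assumptions, not Supabase recommendations.

def per_instance_pool_size(connection_limit: int, reserved: int, instances: int) -> int:
    """Divide the usable connection budget across app instances.

    `reserved` holds back connections for migrations, psql sessions,
    and platform-internal services.
    """
    usable = connection_limit - reserved
    if usable <= 0 or instances <= 0:
        raise ValueError("no connection budget available")
    return max(1, usable // instances)

# Example: 60-connection plan, 10 reserved, 5 app instances -> 10 connections each
print(per_instance_pool_size(60, 10, 5))  # 10
```

The same arithmetic applies whether your pools point at the Supavisor endpoint or (for admin tooling) the direct connection: the sum of every pool's maximum size must stay under the limit.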
Configuring your application for Supavisor
The key distinction in every case: use the pooler connection string for application traffic, and the direct connection string only for migrations and admin tasks.
Next.js / Vercel:

```shell
# Application connections (API routes, server components)
DATABASE_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres"

# Migrations only
DIRECT_URL="postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres"
```

Prisma: Add ?pgbouncer=true to the pooler connection string. Supavisor supports the PgBouncer protocol flag for compatibility:
```prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")   // pooler endpoint with ?pgbouncer=true
  directUrl = env("DIRECT_URL")     // direct connection for migrations
}
```

Drizzle:
```typescript
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

// prepare: false disables prepared statements, which transaction-mode
// pooling does not support
const client = postgres(process.env.DATABASE_URL, { prepare: false }); // pooler endpoint
const db = drizzle(client);
```

Django:
```python
# Django settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres.[ref]',
        'PASSWORD': '[password]',
        'HOST': 'aws-0-[region].pooler.supabase.com',
        'PORT': '6543',
        'DISABLE_SERVER_SIDE_CURSORS': True,  # Required for transaction-mode pooling
    }
}
```

SQLAlchemy:
```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres",
    pool_size=5,
    max_overflow=10,
)
```

Row Level Security — a performance conversation worth having
How RLS affects query performance
Row Level Security (RLS) is one of Supabase's core features. When enabled on a table, RLS policies are applied as additional WHERE clauses on every query. The PostgreSQL planner appends the policy's condition to the query before optimization, meaning RLS is integrated into the query plan.
This has performance implications that vary dramatically based on how the policies are written. The difference between a well-written and a poorly-written RLS policy is not incremental — it is orders of magnitude.
Well-written RLS policies add negligible overhead:
```sql
-- This policy is fast when user_id is indexed
CREATE POLICY "Users see own data" ON orders
  FOR SELECT USING (user_id = auth.uid());
```

The planner sees this as SELECT ... FROM orders WHERE user_id = '<uuid>' AND <your original conditions>. If user_id has an index, this is an index scan.
Poorly-written RLS policies introduce serious overhead:
```sql
-- This policy is expensive: the subquery executes per row access
CREATE POLICY "Team members see team data" ON documents
  FOR SELECT USING (
    team_id IN (SELECT team_id FROM team_members WHERE user_id = auth.uid())
  );
```

Writing performant RLS policies
Index the columns used in RLS policies. This is the single most important optimization for RLS performance:
```sql
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_documents_user_id ON documents (user_id);
CREATE INDEX idx_comments_user_id ON comments (user_id);
```

Use auth.uid() and auth.jwt() efficiently. These functions are stable, not constant, so wrap them in a scalar subquery, (SELECT auth.uid()), and the planner evaluates them once per statement rather than once per row:

```sql
-- Good: (SELECT auth.uid()) is evaluated once per statement
CREATE POLICY "own_data" ON orders
  FOR SELECT USING (user_id = (SELECT auth.uid()));

-- Good: JWT claim extraction
CREATE POLICY "org_data" ON orders
  FOR SELECT USING (org_id = ((SELECT auth.jwt()) ->> 'org_id')::uuid);
```

Avoid subqueries where possible. When unavoidable, use EXISTS instead of IN:
```sql
-- Better than IN: EXISTS can short-circuit
CREATE POLICY "team_data" ON documents
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM team_members
      WHERE team_members.team_id = documents.team_id
        AND team_members.user_id = auth.uid()
    )
  );
```

Test RLS performance with EXPLAIN ANALYZE:
```sql
-- Simulate authenticated user context
-- (SET LOCAL only takes effect inside a transaction)
BEGIN;
SET LOCAL role = 'authenticated';
SET LOCAL request.jwt.claims = '{"sub": "your-user-uuid-here"}';

-- Now EXPLAIN ANALYZE will include RLS policy evaluation
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
ROLLBACK;
```

For detailed guidance on interpreting these plans, see the EXPLAIN ANALYZE guide.
Consider materialized access control for complex permission structures:
```sql
-- Materialized view that precomputes who can access what
CREATE MATERIALIZED VIEW user_document_access AS
SELECT DISTINCT
  tm.user_id,
  d.id AS document_id
FROM team_members tm
JOIN team_permissions tp ON tp.team_id = tm.team_id
JOIN documents d ON d.org_id = tp.org_id
WHERE tp.permission = 'read';

CREATE UNIQUE INDEX idx_uda_user_doc
  ON user_document_access (user_id, document_id);

-- Simple, fast RLS policy against the materialized view
CREATE POLICY "materialized_access" ON documents
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM user_document_access
      WHERE user_document_access.user_id = auth.uid()
        AND user_document_access.document_id = documents.id
    )
  );

-- Refresh when permissions change (via trigger or pg_cron)
REFRESH MATERIALIZED VIEW CONCURRENTLY user_document_access;
```

When RLS is the wrong tool
RLS is excellent for multi-tenant applications where the tenant boundary is clean — one column, one policy, one index. For complex permission hierarchies involving role inheritance, organizational trees, and cross-entity sharing rules, the practical approach is to enforce the simple tenant boundary with RLS and handle fine-grained permission logic in the application layer.
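A minimal sketch of that split, in Python: RLS has already scoped the rows to the tenant, and the fine-grained rules (role inheritance, per-document sharing) run in application code. The role hierarchy and document shape below are illustrative assumptions, not a Supabase API:

```python
# Illustrative application-layer permission check, applied AFTER RLS has
# already restricted rows to the caller's tenant. Role names, ranks, and
# the document shape are assumptions for this sketch.

ROLE_RANK = {"viewer": 0, "editor": 1, "admin": 2}  # higher rank inherits lower

def has_permission(role: str, required: str) -> bool:
    """Role inheritance: a higher-ranked role satisfies a lower requirement."""
    return ROLE_RANK.get(role, -1) >= ROLE_RANK[required]

def can_edit(user_id: str, user_role: str, doc: dict) -> bool:
    if doc["owner_id"] == user_id:          # owners always edit their own docs
        return True
    shared_role = doc.get("shared_with", {}).get(user_id)
    if shared_role and has_permission(shared_role, "editor"):
        return True                          # explicit per-document share
    return has_permission(user_role, "admin")  # org admins edit everything

doc = {"owner_id": "u1", "shared_with": {"u2": "viewer"}}
assert can_edit("u1", "viewer", doc)       # owner
assert not can_edit("u2", "viewer", doc)   # shared, but viewer only
assert can_edit("u3", "admin", doc)        # admin via role inheritance
```

Logic like this would be impractical to express as a single indexed RLS predicate, which is exactly why the tenant boundary and the fine-grained rules belong in different layers.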
Realtime performance
How Supabase Realtime works
Supabase Realtime uses PostgreSQL logical replication to stream database changes to connected clients over WebSocket connections. When you subscribe to changes on a table, Supabase creates a replication slot and uses a WAL decoder to read changes and broadcast them.
This architecture is elegant — it uses PostgreSQL's built-in replication infrastructure rather than application-level triggers or polling. But it has performance implications that scale with write volume.
Optimizing Realtime subscriptions
Subscribe only to the tables you need. Each subscription adds WAL processing overhead. Subscribing to tables "just in case" is the database equivalent of leaving every light in the manor on — it costs more than you realize.
Use filters to reduce broadcast volume:
```typescript
const channel = supabase
  .channel('room-messages')
  .on(
    'postgres_changes',
    {
      event: 'INSERT',
      schema: 'public',
      table: 'messages',
      filter: 'room_id=eq.123'
    },
    (payload) => {
      console.log('New message:', payload.new);
    }
  )
  .subscribe();
```

Avoid Realtime on high-write tables. Tables that receive continuous writes — analytics events, logs, sensor data — generate enormous WAL volume. For high-frequency data, consider batch writes and publish summaries to a lower-volume table.
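A sketch of that batch-and-summarize pattern: buffer high-frequency events in the application and flush one summary row to a low-volume table that Realtime clients subscribe to. The flush_summary callback stands in for a real insert; the class and parameter names are illustrative, not a Supabase API:

```python
# Illustrative event batcher: N raw events become one summary write.
# flush_summary stands in for an insert into a low-volume summary table.

from collections import Counter

class EventBatcher:
    def __init__(self, flush_summary, batch_size: int = 100):
        self.flush_summary = flush_summary
        self.batch_size = batch_size
        self.counts: Counter = Counter()
        self.pending = 0

    def record(self, event_type: str) -> None:
        self.counts[event_type] += 1
        self.pending += 1
        if self.pending >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.flush_summary(dict(self.counts))  # one write instead of N
            self.counts.clear()
            self.pending = 0

summaries = []
batcher = EventBatcher(summaries.append, batch_size=3)
for event in ["click", "click", "view"]:
    batcher.record(event)
print(summaries)  # [{'click': 2, 'view': 1}]
```

In production you would also flush on a timer (so a quiet period does not strand events in memory), but the core idea is the same: the WAL sees one row per batch, not one per event.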
Monitor replication lag:
```sql
SELECT
  slot_name,
  active,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS lag_size
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

A growing lag_size indicates the Realtime decoder is not keeping up with write volume.
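If you poll this query from a monitoring script, the lag_size column comes back as a pg_size_pretty string such as '16 MB'. A small helper, sketched here with an illustrative 64 MB threshold (pick your own based on write volume), can turn that into an alert condition:

```python
# Parse pg_size_pretty output (e.g. '16 MB', '1024 kB') and compare it to
# an alert threshold. The 64 MB default is an illustrative assumption.

UNITS = {"bytes": 1, "kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def pretty_size_to_bytes(pretty: str) -> int:
    value, unit = pretty.split()
    return int(float(value) * UNITS[unit])

def lag_exceeds(pretty_lag: str, threshold_mb: int = 64) -> bool:
    return pretty_size_to_bytes(pretty_lag) > threshold_mb * 1024**2

print(lag_exceeds("16 MB"))  # False
print(lag_exceeds("2 GB"))   # True
```

Alternatively, skip pg_size_pretty in the query and compare the raw byte count from pg_wal_lsn_diff directly, which avoids parsing entirely.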
Extensions and index management
Available extensions
Several extensions are particularly relevant for performance optimization:
- pg_stat_statements (enabled by default) — tracks execution statistics for all SQL statements. See the pg_stat_statements guide.
- pg_trgm — trigram-based similarity search, useful for LIKE and ILIKE queries.
- pgvector — vector similarity search. See the pgvector performance tuning guide.
- pg_cron (Pro plan and above) — scheduled tasks for materialized view refreshes and periodic maintenance.
- hypopg — test hypothetical indexes without creating them. See the HypoPG guide.
Index management on Supabase
Supabase does not create indexes automatically beyond primary keys and unique constraints. Index management is entirely the developer's responsibility — and one of the most common areas where Supabase applications leave performance on the table.
The gap is often invisible. Supabase's auto-generated REST API translates URL parameters into SQL queries. A request like GET /rest/v1/orders?user_id=eq.abc123&status=eq.pending becomes SELECT * FROM orders WHERE user_id = 'abc123' AND status = 'pending'. If neither column is indexed, this performs a sequential scan.
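To make that translation concrete, here is a toy sketch of the mapping in Python. It is a simplified illustration, not PostgREST's actual implementation: it handles only the eq operator and does no escaping or validation:

```python
# Toy illustration of how PostgREST-style URL filters map to SQL.
# Simplified sketch: only the `eq` operator, no escaping, not real PostgREST.

def rest_filters_to_sql(table: str, params: dict[str, str]) -> str:
    conditions = []
    for column, expr in params.items():
        op, _, value = expr.partition(".")
        if op == "eq":  # e.g. user_id=eq.abc123
            conditions.append(f"{column} = '{value}'")
    return f"SELECT * FROM {table} WHERE " + " AND ".join(conditions)

sql = rest_filters_to_sql("orders", {"user_id": "eq.abc123", "status": "eq.pending"})
print(sql)  # SELECT * FROM orders WHERE user_id = 'abc123' AND status = 'pending'
```

Every column that appears on the right-hand side of a filter like this is a WHERE-clause column in disguise, which is why the indexing checklist below starts with API filters.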
Create indexes for:
- Columns used in RLS policies
- Columns used in API filters (?column=eq.value parameters)
- Columns used in ORDER BY clauses (especially with LIMIT)
- Foreign key columns (PostgreSQL does not automatically index foreign keys)
```sql
-- Common indexes for a typical Supabase application
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at DESC);
CREATE INDEX CONCURRENTLY idx_comments_post_id ON comments (post_id);
```

Use CREATE INDEX CONCURRENTLY to avoid locking the table during index creation. Use EXPLAIN ANALYZE in the SQL editor to verify indexes are being used.
For advanced indexing techniques, see the guides on partial indexes and composite indexes.
Edge Functions and database performance
Supabase Edge Functions run geographically close to users but potentially far from the database. This creates latency characteristics that differ from a co-located backend server.
Minimize database round-trips. The round-trip latency between an edge location and the database region is typically 20-100ms. One query returning 10 rows is significantly faster than 10 queries returning 1 row each.
```typescript
// Inefficient: multiple round-trips from edge to database
const user = await supabase.from('users').select('*').eq('id', userId).single();
const orders = await supabase.from('orders').select('*').eq('user_id', userId);
const prefs = await supabase.from('preferences').select('*').eq('user_id', userId);

// Better: single round-trip with join
const { data } = await supabase
  .from('users')
  .select(`
    *,
    orders (*),
    preferences (*)
  `)
  .eq('id', userId)
  .single();
```

For more on avoiding N+1 patterns, see the N+1 queries guide.
Batch writes where possible:
```typescript
// Inefficient: one insert per item
for (const item of items) {
  await supabase.from('events').insert(item);
}

// Better: single bulk insert
await supabase.from('events').insert(items);
```

Supabase's constraints — I should be forthcoming
Compute is tied to your plan. The CPU, memory, and I/O performance available to your database is determined by your plan tier and any compute add-ons. There is no autoscaling of the database itself.
The Free tier is genuinely constrained. Shared compute, 500MB storage, 60 connections. Excellent for development but not designed for production workloads with meaningful traffic.
Some configuration parameters are not directly modifiable. Settings like shared_buffers, work_mem, and random_page_cost are not exposed for direct modification. For per-session tuning, you can use SET commands. For background on what these settings do, see the PostgreSQL configuration tuning guide.
PostgREST adds overhead compared to direct SQL. Every API call passes through HTTP parsing, JWT validation, RLS evaluation, and PostgREST query generation before reaching PostgreSQL.
These are the trade-offs of a managed platform. They are reasonable, well-documented, and for most applications, the developer experience that Supabase provides is genuinely worth the limitations. Supabase is well-built software, improving rapidly. Credit where it is due.