PostgreSQL on Supabase: What the Defaults Leave on the Table

Full PostgreSQL access is a genuine advantage. Allow me to show you around.

The Butler of Gold Lapel · March 29, 2026 · 20 min read
The illustrator is indexing the columns used in RLS policies. We await the EXPLAIN ANALYZE.

Supabase gives you full PostgreSQL — allow me to show you around

Good evening. I understand you have built something on Supabase. Excellent taste.

Supabase is one of the few managed platforms that provides direct access to a PostgreSQL database — not an abstraction layer on top of one, not a proprietary query engine, but an actual PostgreSQL instance with full SQL access, extensions, functions, and configuration options. This is a genuine advantage, and one worth appreciating.

You can write raw SQL in the Supabase SQL editor, install extensions with CREATE EXTENSION, create custom functions and triggers, manage indexes, and configure per-session parameters. The dashboard and client libraries are convenient, but the real power of Supabase is the database underneath.

Most Supabase users underutilize this access — and that is not a criticism, merely an observation. The default Supabase configuration is optimized for getting started quickly, not for production performance at scale. The gap between "deployed on Supabase" and "optimized on Supabase" is significant, and closing it requires understanding both PostgreSQL fundamentals and the Supabase-specific layers that sit on top.

Those layers include:

  • Auth — Supabase's authentication system, integrating with PostgreSQL via auth.uid() and auth.jwt() functions
  • Realtime — change streaming via PostgreSQL logical replication
  • Storage — file storage backed by PostgreSQL metadata tables
  • Edge Functions — Deno-based serverless functions that connect to the database
  • PostgREST — the REST API layer that translates HTTP requests into SQL queries

Each of these additions interacts with database performance. Understanding how they work — and where they introduce overhead — is what we are here to attend to.

Connection pooling with Supavisor

What Supavisor is

Supavisor is Supabase's built-in connection pooler. It replaced PgBouncer in 2024 and is written in Elixir, designed for multi-tenant connection pooling at scale. Every Supabase project gets a Supavisor endpoint automatically.

For background on why connection pooling matters and how different pooling modes work, see the connection pooling guide and the connection pooling glossary entry.

Supavisor supports two pooling modes:

  • Transaction mode (port 6543): Connections are released back to the pool after each transaction completes. This is the default and suitable for most application workloads.
  • Session mode (port 5432 via pooler): The connection persists for the entire client session. Necessary when your application relies on session-level features: prepared statements, LISTEN/NOTIFY, session variables, or advisory locks.

The direct connection string (port 5432, non-pooler) bypasses Supavisor entirely. Use this for migrations, psql sessions, and administrative tasks — not for application traffic.

Connection limits by plan

Plan         Direct Connections   Notes
Free         60                   Shared compute
Pro          60                   Same connection limit as Free; upgrade provides more compute
Team         60–200               Depends on compute add-on
Enterprise   Custom               Negotiated

An important detail — and one I wish were more widely known: the Pro plan has the same 60-connection limit as the Free plan. The upgrade buys more CPU and memory, not more connections. This catches developers off guard when they upgrade specifically to solve connection issues.

If you are hitting connection limits:

  1. Use the Supavisor pooler endpoint for all application traffic. This is the single most effective change.
  2. Reduce connection idle time in your application. Close connections promptly, set reasonable idle timeouts.
  3. Consider a compute add-on before raising the connection limit. More compute headroom per connection usually matters more than more connections.
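Before changing anything, it is worth measuring how close you actually are to the ceiling. The standard pg_stat_activity catalog view works as-is in the Supabase SQL editor:

```sql
-- Current client sessions grouped by state; compare the total
-- against your plan's connection limit
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;

-- The configured ceiling for this instance
SHOW max_connections;
```

A large count of 'idle' sessions usually indicates application connections being held open rather than returned promptly, which is precisely what the pooler endpoint addresses.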

Configuring your application for Supavisor

The key distinction in every case: use the pooler connection string for application traffic, and the direct connection string only for migrations and admin tasks.

Next.js / Vercel:

.env configuration
# Application connections (API routes, server components)
DATABASE_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres"

# Migrations only
DIRECT_URL="postgresql://postgres.[ref]:[password]@aws-0-[region].supabase.com:5432/postgres"

Prisma: Add ?pgbouncer=true to the pooler connection string. The flag tells Prisma to skip prepared statements, which transaction-mode pooling does not support; Supavisor accepts it for PgBouncer compatibility:

Prisma schema.prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")      // pooler endpoint with ?pgbouncer=true
  directUrl = env("DIRECT_URL")        // direct connection for migrations
}

Drizzle:

Drizzle setup
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';

const client = postgres(process.env.DATABASE_URL, { prepare: false });  // pooler endpoint; disable prepared statements for transaction mode
const db = drizzle(client);

Django:

Django settings.py
# Django settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres.[ref]',
        'PASSWORD': '[password]',
        'HOST': 'aws-0-[region].pooler.supabase.com',
        'PORT': '6543',
        'DISABLE_SERVER_SIDE_CURSORS': True,  # Required for transaction-mode pooling
    }
}

SQLAlchemy:

SQLAlchemy engine configuration
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres",
    pool_size=5,
    max_overflow=10,
)

Row Level Security — a performance conversation worth having

How RLS affects query performance

Row Level Security (RLS) is one of Supabase's core features. When enabled on a table, RLS policies are applied as additional WHERE clauses on every query. The PostgreSQL planner appends the policy's condition to the query before optimization, meaning RLS is integrated into the query plan.

This has performance implications that vary dramatically based on how the policies are written. The difference between a well-written and a poorly-written RLS policy is not incremental — it is orders of magnitude.

Well-written RLS policies add negligible overhead:

Fast RLS policy — indexed column, constant function
-- This policy is fast when user_id is indexed
CREATE POLICY "Users see own data" ON orders
  FOR SELECT USING (user_id = auth.uid());

The planner sees this as SELECT ... FROM orders WHERE user_id = '<uuid>' AND <your original conditions>. If user_id has an index, this is an index scan.

Poorly-written RLS policies introduce serious overhead:

Expensive RLS policy — subquery per row access
-- This policy is expensive: subquery executes per row access
CREATE POLICY "Team members see team data" ON documents
  FOR SELECT USING (
    team_id IN (SELECT team_id FROM team_members WHERE user_id = auth.uid())
  );

Writing performant RLS policies

Index the columns used in RLS policies. This is the single most important optimization for RLS performance:

Index every column used in RLS policies
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_documents_user_id ON documents (user_id);
CREATE INDEX idx_comments_user_id ON comments (user_id);

Use auth.uid() and auth.jwt(). These Supabase helper functions are marked STABLE, so the planner can avoid re-evaluating them for every row; on hot paths, wrapping the call as (select auth.uid()) forces a single evaluation per statement, a pattern Supabase's own RLS guidance recommends:

Optimal RLS patterns
-- Good: auth.uid() is a constant for the planner
CREATE POLICY "own_data" ON orders
  FOR SELECT USING (user_id = auth.uid());

-- Good: JWT claim extraction
CREATE POLICY "org_data" ON orders
  FOR SELECT USING (org_id = (auth.jwt() ->> 'org_id')::uuid);

Avoid subqueries where possible. When unavoidable, use EXISTS instead of IN:

EXISTS is better than IN for RLS subqueries
-- Better than IN: EXISTS can short-circuit
CREATE POLICY "team_data" ON documents
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM team_members
      WHERE team_members.team_id = documents.team_id
        AND team_members.user_id = auth.uid()
    )
  );

Test RLS performance with EXPLAIN ANALYZE:

Simulate authenticated RLS evaluation
-- Simulate authenticated user context
SET LOCAL role = 'authenticated';
SET LOCAL request.jwt.claims = '{"sub": "your-user-uuid-here"}';

-- Now EXPLAIN ANALYZE will include RLS policy evaluation
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';

For detailed guidance on interpreting these plans, see the EXPLAIN ANALYZE guide.

Consider materialized access control for complex permission structures:

Materialized view for complex access control
-- Materialized view that precomputes who can access what
CREATE MATERIALIZED VIEW user_document_access AS
SELECT DISTINCT
  tm.user_id,
  d.id AS document_id
FROM team_members tm
JOIN team_permissions tp ON tp.team_id = tm.team_id
JOIN documents d ON d.org_id = tp.org_id
WHERE tp.permission = 'read';

CREATE UNIQUE INDEX idx_uda_user_doc
  ON user_document_access (user_id, document_id);

-- Simple, fast RLS policy against the materialized view
CREATE POLICY "materialized_access" ON documents
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM user_document_access
      WHERE user_document_access.user_id = auth.uid()
        AND user_document_access.document_id = documents.id
    )
  );

-- Refresh when permissions change (via trigger or pg_cron)
REFRESH MATERIALIZED VIEW CONCURRENTLY user_document_access;
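On plans where pg_cron is available, the refresh can be scheduled rather than wired to a trigger. A sketch, with an illustrative job name and interval:

```sql
-- Requires the pg_cron extension (Pro plan and above)
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Refresh the access view every five minutes;
-- tune the interval to your staleness tolerance
SELECT cron.schedule(
  'refresh-user-document-access',   -- job name (arbitrary)
  '*/5 * * * *',                    -- standard cron syntax
  $$REFRESH MATERIALIZED VIEW CONCURRENTLY user_document_access$$
);
```

The trade-off is a bounded staleness window: a permission change takes effect on the next refresh, not immediately. For revocations that must apply instantly, pair the scheduled refresh with an explicit refresh in the code path that changes permissions.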

When RLS is the wrong tool

RLS is excellent for multi-tenant applications where the tenant boundary is clean — one column, one policy, one index. For complex permission hierarchies involving role inheritance, organizational trees, and cross-entity sharing rules, the practical approach is to enforce the simple tenant boundary with RLS and handle fine-grained permission logic in the application layer.
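The hybrid approach looks like this in practice: one coarse boundary enforced in the database, everything finer-grained in application code. The tenant_id column and JWT claim below are hypothetical, standing in for whatever your clean tenant boundary is:

```sql
-- Coarse boundary in the database: one column, one index, one policy
CREATE INDEX idx_documents_tenant_id ON documents (tenant_id);

CREATE POLICY "tenant_isolation" ON documents
  FOR SELECT USING (tenant_id = (auth.jwt() ->> 'tenant_id')::uuid);

-- Fine-grained rules (role inheritance, cross-entity sharing) live in
-- application code, which only ever queries within the tenant's rows.
```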

Realtime performance

How Supabase Realtime works

Supabase Realtime uses PostgreSQL logical replication to stream database changes to connected clients over WebSocket connections. When you subscribe to changes on a table, Supabase creates a replication slot and uses a WAL decoder to read changes and broadcast them.

This architecture is elegant — it uses PostgreSQL's built-in replication infrastructure rather than application-level triggers or polling. But it has performance implications that scale with write volume.

Optimizing Realtime subscriptions

Subscribe only to the tables you need. Each subscription adds WAL processing overhead. Subscribing to tables "just in case" is the database equivalent of leaving every light in the manor on — it costs more than you realize.

Use filters to reduce broadcast volume:

Filtered Realtime subscription
const channel = supabase
  .channel('room-messages')
  .on(
    'postgres_changes',
    {
      event: 'INSERT',
      schema: 'public',
      table: 'messages',
      filter: 'room_id=eq.123'
    },
    (payload) => {
      console.log('New message:', payload.new);
    }
  )
  .subscribe();

Avoid Realtime on high-write tables. Tables that receive continuous writes — analytics events, logs, sensor data — generate enormous WAL volume. For high-frequency data, consider batch writes and publish summaries to a lower-volume table.
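One way to implement the summary pattern, sketched here with hypothetical events and events_summary tables, is a periodic rollup that Realtime clients subscribe to in place of the raw stream:

```sql
-- Hypothetical rollup, run on a schedule (pg_cron or an Edge Function):
-- aggregate the raw stream into one row per minute per event type.
-- Assumes a unique constraint on events_summary (bucket, event_type).
INSERT INTO events_summary (bucket, event_type, event_count)
SELECT
  date_trunc('minute', created_at) AS bucket,
  event_type,
  count(*)
FROM events
WHERE created_at >= now() - interval '1 minute'
GROUP BY 1, 2
ON CONFLICT (bucket, event_type)
  DO UPDATE SET event_count = EXCLUDED.event_count;
```

Clients subscribing to events_summary see a handful of changes per minute instead of thousands, and the WAL decoder is no longer asked to keep pace with the raw write rate.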

Monitor replication lag:

Check Realtime replication lag
SELECT
  slot_name,
  active,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS lag_size
FROM pg_replication_slots
WHERE slot_type = 'logical';

Growing lag_size indicates the Realtime decoder is not keeping up with write volume.

Extensions and index management

Available extensions

Several extensions are particularly relevant for performance optimization:

  • pg_stat_statements (enabled by default) — tracks execution statistics for all SQL statements. See the pg_stat_statements guide.
  • pg_trgm — trigram-based similarity search, useful for LIKE and ILIKE queries.
  • pgvector — vector similarity search. See the pgvector performance tuning guide.
  • pg_cron (Pro plan and above) — scheduled tasks for materialized view refreshes and periodic maintenance.
  • hypopg — test hypothetical indexes without creating them. See the HypoPG guide.
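As an example of what these unlock: pg_trgm turns ILIKE searches with leading wildcards, which a B-tree index cannot serve, into index scans. The products table here is hypothetical:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- GIN trigram index: supports LIKE/ILIKE even with a leading wildcard
CREATE INDEX idx_products_name_trgm
  ON products USING gin (name gin_trgm_ops);

-- This query can now use the index instead of a sequential scan
SELECT * FROM products WHERE name ILIKE '%widget%';
```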

Index management on Supabase

Supabase does not create indexes automatically beyond primary keys and unique constraints. Index management is entirely the developer's responsibility — and one of the most common areas where Supabase applications leave performance on the table.

The gap is often invisible. Supabase's auto-generated REST API translates URL parameters into SQL queries. A request like GET /rest/v1/orders?user_id=eq.abc123&status=eq.pending becomes SELECT * FROM orders WHERE user_id = 'abc123' AND status = 'pending'. If neither column is indexed, this performs a sequential scan.

Create indexes for:

  • Columns used in RLS policies
  • Columns used in API filters (?column=eq.value parameters)
  • Columns used in ORDER BY clauses (especially with LIMIT)
  • Foreign key columns (PostgreSQL does not automatically index foreign keys)

Common indexes for Supabase applications
-- Common indexes for a typical Supabase application
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at DESC);
CREATE INDEX CONCURRENTLY idx_comments_post_id ON comments (post_id);

Use CREATE INDEX CONCURRENTLY to avoid locking the table against writes during index creation; note that CONCURRENTLY cannot run inside a transaction block, so issue it as a standalone statement. Use EXPLAIN ANALYZE in the SQL editor to verify the new indexes are actually being used.

For advanced indexing techniques, see the guides on partial indexes and composite indexes.

Edge Functions and database performance

Supabase Edge Functions run geographically close to users but potentially far from the database. This creates latency characteristics that differ from a co-located backend server.

Minimize database round-trips. The round-trip latency between an edge location and the database region is typically 20-100ms. One query returning 10 rows is significantly faster than 10 queries returning 1 row each.

Inefficient: multiple round-trips
// Inefficient: multiple round-trips from edge to database
const user = await supabase.from('users').select('*').eq('id', userId).single();
const orders = await supabase.from('orders').select('*').eq('user_id', userId);
const prefs = await supabase.from('preferences').select('*').eq('user_id', userId);

Better: single round-trip with join
// Better: single round-trip with join
const { data } = await supabase
  .from('users')
  .select(`
    *,
    orders (*),
    preferences (*)
  `)
  .eq('id', userId)
  .single();

For more on avoiding N+1 patterns, see the N+1 queries guide.

Batch writes where possible:

Batch inserts from Edge Functions
// Inefficient: one insert per item
for (const item of items) {
  await supabase.from('events').insert(item);
}

// Better: single bulk insert
await supabase.from('events').insert(items);

Supabase's constraints — I should be forthcoming

Compute is tied to your plan. The CPU, memory, and I/O performance available to your database is determined by your plan tier and any compute add-ons. There is no autoscaling of the database itself.

The Free tier is genuinely constrained. Shared compute, 500MB storage, 60 connections. Excellent for development but not designed for production workloads with meaningful traffic.

Some configuration parameters are not directly modifiable. Settings like shared_buffers, work_mem, and random_page_cost are not exposed for direct modification. For per-session tuning, you can use SET commands. For background on what these settings do, see the PostgreSQL configuration tuning guide.
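Of the parameters that do accept session-level changes, work_mem is the usual lever. A sketch; the 64MB figure is illustrative, not a recommendation:

```sql
-- Raise the per-sort memory budget for one heavy analytical session
SET work_mem = '64MB';

-- Or scope it to a single transaction
BEGIN;
SET LOCAL work_mem = '64MB';
SELECT customer_id, sum(total)
FROM orders
GROUP BY customer_id
ORDER BY 2 DESC;
COMMIT;
```

Prefer SET LOCAL when connecting through the transaction-mode pooler: a plain SET persists on the underlying server connection and can leak into another client's session when the connection is reused.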

PostgREST adds overhead compared to direct SQL. Every API call passes through HTTP parsing, JWT validation, RLS evaluation, and PostgREST query generation before reaching PostgreSQL.

These are the trade-offs of a managed platform. They are reasonable, well-documented, and for most applications, the developer experience that Supabase provides is genuinely worth the limitations. Supabase is well-built software, improving rapidly. Credit where it is due.

Frequently asked questions