PostgreSQL on Heroku: Making Every Connection Count
Heroku Postgres is a fine host — you simply need to know the house rules
Good evening. I see you have arrived with a Heroku application and twenty connections. Allow me to make them count.
Heroku Postgres is one of the oldest managed PostgreSQL services. It has been running production databases since 2011, and its track record is defined by reliability, simplicity, and a deliberately limited configuration surface area. These are genuine virtues.
That simplicity is both a strength and a constraint. Fewer knobs to turn means fewer things to misconfigure — but also fewer ways to optimize when performance becomes a concern. Heroku's PostgreSQL plans are defined by fixed boundaries: connection limits, storage caps, row limits (on Essential plans), and compute tiers. Optimization on Heroku means working effectively within those boundaries, not lamenting them.
This guide is written for indie developers, small teams, side projects that grew into real products, and Rails applications that have been running on Heroku since the early days. You built real products for real users on constrained infrastructure. The optimization advice here takes those constraints seriously — because building well within constraints is a craft I have considerable respect for.
Connection limits — the constraint that shapes everything
Limits by plan
| Plan | Connections | Storage | Notes |
|---|---|---|---|
| Essential-0 | 20 | 1 GB | 10,000 row limit |
| Essential-1 | 20 | 10 GB | 10 million row limit |
| Essential-2 | 40 | 64 GB | — |
| Standard-0 | 120 | 64 GB | pg_stat_statements, followers |
| Standard-2 | 400 | 256 GB | — |
| Premium-0 | 120 | 64 GB | HA, encryption at rest |
| Premium-2 | 400 | 256 GB | — |
On Essential plans, 20 connections is a hard ceiling, and every process in your Heroku application draws from this pool. Web dynos, worker dynos, one-off dynos (`heroku run rails console`), database migrations, and even the Heroku Data dashboard connection all count against the limit.
Allow me to illustrate with a typical small Rails application:
- 2 web dynos x 5 Puma threads x 1 connection each = 10 connections
- 1 Sidekiq process with 5 threads = 5 connections
- `heroku run rails console` = 1 connection
- Heroku Data dashboard = 1 connection
- Total: 17 of 20 connections
That is 85% of the limit, with no margin for a second console session, a one-off migration, or a traffic spike.
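The arithmetic above is worth automating whenever dyno counts or thread counts change. A minimal sketch, using a hypothetical helper (not part of Rails or Heroku tooling) that tallies each process type's connection demand against the plan's hard limit:

```ruby
# Hypothetical helper: tally each process type's connection demand
# against a plan's hard connection limit.
def connection_budget(limit:, processes:)
  used = processes.sum { |p| p[:count] * p[:connections] }
  { used: used, limit: limit, headroom: limit - used,
    utilization_pct: (100.0 * used / limit).round }
end

budget = connection_budget(
  limit: 20, # Essential-plan ceiling
  processes: [
    { name: "web (Puma, 5 threads)", count: 2, connections: 5 },
    { name: "sidekiq (5 threads)",   count: 1, connections: 5 },
    { name: "rails console",         count: 1, connections: 1 },
    { name: "Heroku Data dashboard", count: 1, connections: 1 }
  ]
)
puts budget
```

Anything above roughly 80% utilization leaves no headroom for a console session or a migration, which is the situation described above.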
Managing connections on tight plans
Set the connection pool to match your thread count:
```yaml
# config/database.yml
production:
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```

Setting `pool: 20` on a process with 5 threads allows that process to open up to 20 backend connections even though only 5 can ever be used concurrently. Those are connections that other processes need.
Close idle connections aggressively:
```sql
-- Terminate connections idle in a transaction for more than 30 seconds
ALTER DATABASE your_database SET idle_in_transaction_session_timeout = '30s';

-- Terminate statements running longer than 60 seconds
ALTER DATABASE your_database SET statement_timeout = '60s';
```

On a 20-connection plan, one stuck transaction holding a connection represents 5% of total capacity. That is not a rounding error; that is a room in the house made unavailable to every other guest.
PgBouncer on Heroku
The PgBouncer buildpack
Heroku does not include a connection pooler by default. PgBouncer is available via the official buildpack, which runs PgBouncer as a sidecar process on each dyno. For background on how connection pooling works, see the connection pooling guide and the connection pooling glossary entry.
```bash
# Add the buildpack
heroku buildpacks:add heroku/pgbouncer --app your-app-name

# Enable PgBouncer
heroku config:set PGBOUNCER_ENABLED=true --app your-app-name
```

After deployment, your application connects to PgBouncer on localhost, and PgBouncer manages the actual connections to the database. The default mode is transaction pooling.
For a detailed comparison of PgBouncer with other pooling options, see the PgBouncer comparison page.
When PgBouncer helps (and when it doesn't)
PgBouncer helps when:
- You have more application threads than your plan's connection limit allows
- Connection churn is high
- You need to scale web dynos without proportionally increasing database connections
PgBouncer does NOT help when:
- Your connections are mostly idle and the issue is connection count, not churn
- You rely on session-level PostgreSQL features: prepared statements, advisory locks, LISTEN/NOTIFY, `SET` commands, or temporary tables
The prepared statements trade-off: Rails uses prepared statements by default. PgBouncer in transaction mode breaks them. The fix:
```yaml
# config/database.yml
production:
  url: <%= ENV['DATABASE_URL'] %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  prepared_statements: false
```

Prepared statements improve query parsing performance. Disabling them adds a small overhead per query. On Essential plans, the connection limit is usually the tighter constraint, so the trade-off favors PgBouncer.
Finding expensive queries — the investigation
heroku pg:diagnose
heroku pg:diagnose is the built-in diagnostic tool that every Heroku Postgres user should run regularly. It reports on long-running queries, unused indexes, index hit rate, table bloat, connection count, and sequence exhaustion risk.
Run pg:diagnose weekly for active projects, and immediately after any performance complaint.
pg_stat_statements on Heroku
pg_stat_statements is only available on Standard and Premium plans — not on Essential plans.
```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT
  query,
  calls,
  mean_exec_time,
  total_exec_time,
  rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```

For a comprehensive guide, see the pg_stat_statements guide.
On Essential plans without pg_stat_statements, use these alternatives:
```sql
SELECT
  relname AS table_name,
  seq_scan,
  idx_scan,
  n_live_tup AS live_rows,
  n_dead_tup AS dead_rows
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 20;
```

Tables with high seq_scan and low idx_scan are likely missing indexes.
Configure slow query logging:
```bash
heroku pg:settings:log-min-duration-statement 200 --app your-app-name
```

The usual suspects
Missing indexes. The most common cause of slow queries on Heroku:
```sql
-- Check which tables have the most sequential scans
SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE seq_scan > 100
ORDER BY seq_scan DESC;

-- Create an index (use CONCURRENTLY to avoid locking)
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);
```

For detailed guidance, see the EXPLAIN ANALYZE guide.
N+1 queries. The most common performance problem in Rails applications on Heroku. For a comprehensive treatment, see the N+1 queries guide.
Unbounded queries. `SELECT * FROM large_table` without a `LIMIT`. Add LIMIT clauses and pagination.
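For pagination on large tables, keyset (cursor) pagination scales better than OFFSET, since the database can seek directly via the primary-key index instead of counting skipped rows. A minimal sketch of the SQL it produces; the table and column names are illustrative:

```ruby
# Sketch of keyset pagination: filter on the last-seen id instead of
# using OFFSET, so PostgreSQL can seek via the primary-key index.
def keyset_page_sql(table, after_id: nil, limit: 100)
  clauses = ["SELECT * FROM #{table}"]
  clauses << "WHERE id > #{Integer(after_id)}" if after_id
  clauses << "ORDER BY id LIMIT #{Integer(limit)}"
  clauses.join(" ")
end

puts keyset_page_sql("orders")                         # first page
puts keyset_page_sql("orders", after_id: 250, limit: 50) # next page
```

In a real Rails application you would express the same shape through ActiveRecord (`where("id > ?", cursor).order(:id).limit(50)`) rather than building SQL strings.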
Long-running transactions. Set reasonable timeouts:
```sql
-- 30 seconds for web requests
ALTER DATABASE your_database SET statement_timeout = '30s';
```

For background jobs that need longer execution times:

```sql
SET LOCAL statement_timeout = '300s'; -- 5 minutes for this transaction
```

Follower databases — additional staff for the read-heavy work
Followers are asynchronous read replicas, available on Standard and Premium plans.
```bash
heroku addons:create heroku-postgresql:standard-0 \
  --follow DATABASE_URL \
  --app your-app-name
```

Followers are useful for reporting and analytics queries, read-heavy API endpoints, and dashboards that scan large portions of the database.
Rails integration:
```yaml
# config/database.yml with Makara
production:
  adapter: postgresql_makara
  makara:
    sticky: true
    connections:
      - role: master
        url: <%= ENV['DATABASE_URL'] %>
      - role: slave
        url: <%= ENV['DATABASE_FOLLOWER_URL'] %>
```

An important detail: follower connections are separate from primary connections. A follower on Standard-0 gets its own 120-connection limit. This effectively doubles your available read capacity without adding pressure to the primary's connection pool.
Maintenance windows and credential rotation
Maintenance windows
Heroku schedules maintenance for security patches and minor PostgreSQL version updates. Set your maintenance window to your lowest-traffic period:
```bash
heroku pg:maintenance:window DATABASE_URL "Tuesday 06:00" --app your-app-name
```

During maintenance, expect a brief downtime — typically under 60 seconds for Essential plans. Implement retry logic for database connections and use exponential backoff for failed queries.
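A minimal sketch of retry-with-exponential-backoff around a database call, suitable for riding out a sub-minute window. The helper name, attempt counts, and delays are illustrative; in practice you would rescue your adapter's connection error class rather than `StandardError`:

```ruby
# Sketch: retry a block with exponential backoff (0.5s, 1s, 2s, 4s, ...).
def with_retries(attempts: 5, base_delay: 0.5)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    raise if tries >= attempts
    sleep(base_delay * (2**(tries - 1)))
    retry
  end
end

# Simulated flaky call: fails twice, then succeeds on the third try.
calls = 0
result = with_retries(base_delay: 0) do
  calls += 1
  raise "connection refused" if calls < 3
  :ok
end
puts [result, calls].inspect
```

Cap the total retry budget below your web request timeout, or the retries themselves become the outage.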
Do not postpone maintenance indefinitely. Accumulated deferred maintenance eventually forces a longer window.
Credential rotation
Heroku rotates database credentials periodically. Applications must read DATABASE_URL at startup, not from a config file. External services connecting to the database must also handle credential changes. Old credentials stop working immediately after rotation.
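Reading `DATABASE_URL` at boot is a one-liner with Ruby's standard `URI` library. A sketch with a made-up example URL; in production you would call `URI.parse(ENV.fetch("DATABASE_URL"))` so a rotation takes effect on the next restart:

```ruby
require "uri"

# Parse connection details from a DATABASE_URL-style string.
# The URL below is a fabricated example, not a real credential.
url = URI.parse("postgres://alice:s3cret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d123")

config = {
  host:     url.host,
  port:     url.port,
  dbname:   url.path.delete_prefix("/"),
  user:     url.user,
  password: url.password
}
puts config[:dbname]
```

Rails does this parsing for you when `config/database.yml` uses `url: <%= ENV['DATABASE_URL'] %>`; the sketch matters mostly for external scripts and services that connect directly.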
Getting the most from Essential plans
Essential plans are genuinely useful for real applications. Many successful products run on them, and the constraints they impose are workable with deliberate optimization.
Index aggressively. On a constrained plan, the cost of a sequential scan is amplified because you have fewer compute resources to absorb it.
Eliminate N+1 patterns. Each N+1 sequence issues one round-trip per record, holding a connection busy for the entire run and multiplying latency.
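The arithmetic is stark enough to be worth stating: loading N records and then fetching each one's association individually issues 1 + N queries, while eager loading (`includes` in Rails) issues two.

```ruby
# Back-of-envelope round-trip count for an N+1 pattern vs eager loading.
def n_plus_one_queries(n)
  1 + n # one query for the list, then one per record
end

def eager_load_queries(_n)
  2 # one query for the list, one batched IN (...) query
end

n = 200
puts "N+1: #{n_plus_one_queries(n)} queries, eager: #{eager_load_queries(n)} queries"
```

On a 20-connection plan with 200ms of per-query latency, that difference is the gap between a page that renders and one that times out.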
Select only needed columns:
```ruby
# Instead of
User.where(active: true)

# Use
User.where(active: true).select(:id, :name, :email)
```

Cache at the application layer:
```erb
<%# Rails fragment caching %>
<% cache(@user) do %>
  <%= render @user.recent_orders %>
<% end %>
```
When to leave Heroku — and I should be honest about this
Heroku's PostgreSQL pricing is not competitive with AWS RDS, Supabase, or Neon for equivalent compute and storage resources. The premium is the cost of operational simplicity — no VPCs to configure, no parameter groups to manage, no replication to set up manually. That simplicity has real value, especially for small teams.
Configuration options are limited. There are no custom parameter groups, no direct shared_buffers tuning, no random_page_cost adjustment. For applications with specific performance characteristics that would benefit from configuration tuning (see the PostgreSQL configuration tuning guide), this is a real limitation.
If your application has outgrown Heroku's constraints and your team has the operational capacity to manage more infrastructure, migrating to RDS or a dedicated PostgreSQL provider is a reasonable next step. For teams considering RDS, see the AWS RDS PostgreSQL optimization guide.
But do not migrate prematurely. Heroku's simplicity is a genuine asset, and many successful applications run on Heroku Postgres for years without outgrowing it. Heroku is not a stepping stone to be outgrown as quickly as possible. It is a valid long-term choice for teams that value operational simplicity over maximum configurability. And the developers who have built successful products within its constraints have earned, if I may say so, a good deal of respect.