
ActiveRecord Without Rails: Managing PostgreSQL Connections in Sinatra, Grape, and Standalone Ruby

Rails configures 14 connection pool settings you never think about. Without Rails, you inherit the defaults for exactly none of them.

The Waiter of Gold Lapel · Updated Mar 20, 2026 · Published Mar 5, 2026 · 28 min read
The connection pool was configured for Rails. It has just learned it is running in Sinatra. Adjustments are underway.

Good evening. I understand you are running ActiveRecord without a chaperone.

A bold choice. Not a wrong one, necessarily, but one that requires understanding what Rails was doing for you behind the curtain. And Rails was doing rather a lot.

ActiveRecord is a perfectly capable ORM on its own. It predates Rails, philosophically if not chronologically, and it functions without Rails just fine. The connection pool, the query interface, the migration system — all of it works independently. Teams running Sinatra, Grape, or plain Ruby scripts reach for ActiveRecord because it is familiar, well-documented, and already knows PostgreSQL intimately.

The difficulty is not getting ActiveRecord to work outside Rails. The difficulty is getting the connection pool to behave. Rails wires up connection lifecycle management — pool creation, reaping, cleanup after each request, reconnection after forks — through a dozen initializers and middleware you never see. Without Rails, those responsibilities fall to you. Most teams discover this when their Sinatra app works perfectly in development, handles 10 concurrent requests in staging, and drops every 11th request in production with ActiveRecord::ConnectionTimeoutError.

I have watched this pattern unfold across enough production incidents to map every trap. This article covers every connection pool configuration option, the specific gotchas that bite non-Rails applications, the prepared statement problem, fork safety, migration management, debugging techniques, and production-ready setup patterns for Sinatra, Grape, Puma, and standalone scripts.

It is, if you will permit me, rather thorough. The subject warrants it.

The establish_connection trap: it does not actually connect

This is the single most important thing to understand about ActiveRecord outside Rails, and almost nobody knows it.

establish_connection is lazy
# The most common setup — and the most commonly misconfigured
require 'active_record'

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  port:     ENV.fetch('DATABASE_PORT', '5432').to_i,
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
  pool:     ENV.fetch('DATABASE_POOL', 10).to_i,
  checkout_timeout:  5,     # seconds to wait for a connection from the pool
  reaping_frequency: 10,    # seconds between reaper runs (cleans dead connections)
  idle_timeout:      300,   # seconds before idle connections are closed
)

# Here's the gotcha: this call succeeds even if PostgreSQL isn't running.
# establish_connection is lazy. It registers the configuration.
# It does NOT open a TCP socket.
# It does NOT authenticate.
# It does NOT verify the database exists.
#
# Your app will boot fine. The first actual query will fail.
# In production, this means your health check passes, your load balancer
# adds the instance, and then every request gets:
#   PG::ConnectionBad: could not connect to server

establish_connection registers your database configuration. It creates a ConnectionPool object. It does not open a socket, perform a TCP handshake, authenticate with PostgreSQL, or verify that the database exists. The name is misleading. It should be called register_connection_config.

In Rails, this is fine. The boot process calls establish_connection, then immediately runs migrations, seeds, or an initializer that touches the database. If PostgreSQL is unreachable, Rails fails during boot. You see the error. You fix it.

Outside Rails, your application boots, your health check endpoint returns 200 (because the app process is running), your load balancer adds the instance to the pool, and the first real request fails with PG::ConnectionBad. Depending on your retry logic, this either self-heals in a few seconds or cascades into a full outage as the load balancer routes traffic to a server that cannot serve it.

I find this behaviour — calling a method establish_connection when it establishes nothing — to be the sort of naming choice that keeps infrastructure teams employed. The connection is not established. It is pencilled in. Tentatively. With the understanding that someone else will follow through later.

The fix is one line of code. Force a real connection at boot time.

Verify the connection at boot
# Force an actual connection at boot time
ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     'localhost',
  database: 'myapp',
  username: 'myapp',
  password: ENV['DATABASE_PASSWORD'],
  pool:     10,
)

# Verify the connection is real — this actually connects
begin
  ActiveRecord::Base.connection.execute('SELECT 1')
  puts "Database connection verified"
rescue PG::ConnectionBad, ActiveRecord::ConnectionNotEstablished => e
  # newer ActiveRecord versions wrap the PG error in their own class
  abort "Cannot connect to PostgreSQL: #{e.message}"
end

ActiveRecord::Base.connection checks out a connection from the pool, which forces the pool to actually create one. The SELECT 1 verifies the round trip. If PostgreSQL is down, you get the error at boot, not at request time. This is a small thing that prevents large problems.

One additional subtlety: the connection checked out by ActiveRecord::Base.connection during verification remains checked out on the main thread. In a web server context this is irrelevant — Puma will fork workers or spawn threads, each getting their own checkout. But in a standalone script, that verified connection is now your working connection. This is fine. Just be aware it happened.

Every pool configuration option, explained

ActiveRecord's connection pool accepts eight configuration options. Rails sets sensible defaults for all of them. Without Rails, you are working with ActiveRecord's internal defaults, which are occasionally different from what Rails would have set. Here is the complete reference.

| Option | Default | Rails default | What it does |
|---|---|---|---|
| pool | 5 | 5 (from database.yml) | Maximum number of connections in the pool |
| checkout_timeout | 5 | 5 | Seconds to wait for a connection before raising ConnectionTimeoutError |
| reaping_frequency | 60 | 60 (set by Rails) | Seconds between reaper thread runs. The reaper closes dead connections. Set to nil to disable. |
| idle_timeout | 300 | 300 (set by Rails) | Seconds before an idle connection is eligible for reaping. 0 means never reap idle connections. |
| prepared_statements | true | true | Enable PostgreSQL prepared statements. Disable if using PgBouncer in transaction mode. |
| advisory_locks | true | true | Enable advisory locks for migrations. Usually irrelevant outside Rails. |
| connect_timeout | nil (OS default) | nil | TCP connection timeout in seconds. Not the same as checkout_timeout. |
| variables | {} | { statement_timeout: ... } | Hash of PostgreSQL session variables to SET on each new connection. |

Three of these deserve particular attention outside Rails: pool, reaping_frequency, and idle_timeout.

The pool size must match or exceed your concurrency model. If you run Puma with 5 threads per worker, the pool must be at least 5. If the pool is 3 and you have 5 threads, 2 threads will block waiting for a connection on every request. Under sustained load, those 2 threads hit checkout_timeout and raise exceptions. This is the most common production failure in non-Rails ActiveRecord applications.
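That mismatch is cheap to catch at boot instead of under load. A fail-fast sketch — the DATABASE_POOL and PUMA_MAX_THREADS variable names are assumptions of this example, not a convention ActiveRecord knows about:

```ruby
# Refuse to boot if the pool cannot cover the thread count.
# Env var names are illustrative; match them to your deployment.
pool_size    = ENV.fetch('DATABASE_POOL', '5').to_i
puma_threads = ENV.fetch('PUMA_MAX_THREADS', '5').to_i

if pool_size < puma_threads
  abort "DATABASE_POOL (#{pool_size}) < PUMA_MAX_THREADS (#{puma_threads}): " \
        "#{puma_threads - pool_size} threads would block on every busy request"
end
```

Five lines in your boot file turn a production timeout into a deploy-time error message.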

The reaping_frequency controls how often ActiveRecord checks for and removes dead connections — connections that were checked out by a thread that no longer exists. Without the reaper, dead connections accumulate in the pool, reducing the effective pool size until checkout_timeout errors appear. Rails starts the reaper automatically. Without Rails, you must set reaping_frequency explicitly.

The idle_timeout determines when unused connections are closed. In a long-running daemon that handles bursts of traffic, idle connections between bursts consume PostgreSQL backend slots for no reason. A 300-second idle timeout (5 minutes) is reasonable for most applications. Set to 0 to keep idle connections forever, which is appropriate only for applications with constant, predictable load.
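The variables option from the table pairs naturally with idle_timeout tuning: session-level settings are applied via SET on each new connection, so connections recycled by the reaper come back with their guardrails intact. A sketch — the specific timeout values are illustrative, not recommendations:

```ruby
ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     'localhost',
  database: 'myapp',
  username: 'myapp',
  password: ENV['DATABASE_PASSWORD'],
  pool:     10,
  reaping_frequency: 10,
  idle_timeout:      300,
  variables: {
    statement_timeout: '5s',   # SET statement_timeout = '5s' on each new connection
    lock_timeout:      '2s',   # fail fast instead of queueing behind a lock
  },
)
```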

The fourth option that deserves mention — and which I saved for its own section — is prepared_statements. It has a way of causing trouble that is uniquely painful outside Rails.

The reaper: what it does and why it matters without Rails

The reaper is a background thread that periodically walks the connection pool, looking for two things: dead connections (checked out by threads that have since terminated) and idle connections (checked in but unused for longer than idle_timeout). It closes both.

Without the reaper, your pool degrades over time. Here is a typical failure sequence.

1. A thread checks out a connection and begins a query.

2. The thread is killed (timeout, exception, crash).

3. The connection is still marked as "checked out" in the pool.

4. No one returns it. The pool thinks it is in use.

5. Repeat 5 times with a pool of 10, and effective pool size is 5.

6. Repeat 10 times, and no connections are available. Every request times out.

This failure mode is insidious because it is gradual. Your application does not crash. It slows down. Requests that normally take 50ms start taking 5 seconds as threads queue behind checkout_timeout. Then they start failing. The error rate climbs over hours or days, not minutes. By the time anyone investigates, the connection between "a thread died last Tuesday" and "the pool is exhausted today" is not obvious.

Checking and configuring the reaper
# The reaper problem outside Rails
#
# Rails sets reaping_frequency and idle_timeout automatically.
# Without Rails, the defaults are technically present in ActiveRecord,
# but the reaper thread only starts if reaping_frequency is truthy
# AND the pool is created in the right lifecycle context.
#
# If the reaper isn't running, dead connections accumulate:

# Simulate the problem:
ActiveRecord::Base.establish_connection(
  adapter: 'postgresql',
  host: 'localhost',
  database: 'myapp',
  username: 'myapp',
  password: 'secret',
  pool: 10,
  # reaping_frequency not set — reaper may or may not start
)

# Check if the reaper is actually running:
pool = ActiveRecord::Base.connection_pool
puts pool.reaper           # => #<ActiveRecord::ConnectionAdapters::ConnectionPool::Reaper>
puts pool.reaper.frequency # => nil means the reaper is NOT running

# Explicit configuration:
ActiveRecord::Base.establish_connection(
  adapter: 'postgresql',
  host: 'localhost',
  database: 'myapp',
  username: 'myapp',
  password: 'secret',
  pool: 10,
  reaping_frequency: 10,   # check every 10 seconds
  idle_timeout: 300,        # close connections idle for 5+ minutes
)

# Now verify:
pool = ActiveRecord::Base.connection_pool
puts pool.reaper.frequency # => 10 — reaper is running

The reaper frequency is a balance. Too frequent (every 1 second) and you add unnecessary overhead — the reaper acquires a mutex on the pool while it walks the connections. Too infrequent (every 300 seconds) and dead connections linger for five minutes before being reclaimed. 10 seconds is a sensible default for web applications. For background jobs that run every few minutes, 30-60 seconds is fine.

I should also note that the reaper only reclaims connections from threads that have terminated. If a thread is alive but has finished its database work, the reaper will not touch its connection. This is why with_connection blocks matter — they return the connection when the block ends, regardless of whether the thread continues. The reaper is a safety net, not a substitute for proper connection hygiene.
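The pool's internal counters — busy, dead, idle, waiting — are exposed by ConnectionPool#stat (ActiveRecord 5.1+), which makes the gradual degradation described above measurable before it becomes an outage. A small interpreter; the thresholds and the sample hash are illustrative:

```ruby
# Interprets the hash returned by ActiveRecord::Base.connection_pool.stat.
# In production: pool_pressure(ActiveRecord::Base.connection_pool.stat)
def pool_pressure(stats)
  problems = []
  problems << "dead connections: #{stats[:dead]}"   if stats[:dead].to_i > 0
  problems << "threads waiting: #{stats[:waiting]}" if stats[:waiting].to_i > 0
  problems << "pool saturated"                      if stats[:busy].to_i >= stats[:size].to_i
  problems
end

# A pool mid-degradation: one dead connection, two threads queueing.
sample = { size: 10, connections: 10, busy: 9, dead: 1, idle: 0, waiting: 2 }
pool_pressure(sample)  # => ["dead connections: 1", "threads waiting: 2"]
```

Log the result every minute from the daemon loop and the "a thread died last Tuesday" investigation becomes a grep.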

Prepared statements: the trap nobody warns you about

ActiveRecord enables prepared statements by default. This is an excellent performance optimization — PostgreSQL parses and plans the query once, then executes the cached plan on subsequent calls with different parameters. For a query that runs thousands of times per hour, the savings in parse and plan time are meaningful.

Outside Rails, prepared statements become a source of three distinct failure modes that are confusing to diagnose if you are not expecting them.

Prepared statement pitfalls
# The prepared statements problem outside Rails
#
# ActiveRecord uses prepared statements by default.
# Each connection maintains its own set of prepared statements.
# This is fine when connections are stable and long-lived.
# It becomes a problem in three scenarios:

# 1. PgBouncer in transaction mode
#    PgBouncer reassigns backend connections between transactions.
#    Your app prepares "a1" on connection A, then gets connection B
#    for the next transaction. Connection B has never seen "a1".
#    Result: PG::InvalidSqlStatementName: prepared statement "a1" does not exist

# 2. PostgreSQL restarts
#    All prepared statements are lost. ActiveRecord doesn't know.
#    The next query using a cached prepared statement fails.
#    Rails handles this with automatic reconnection. Without Rails,
#    you get: PG::InvalidSqlStatementName until the connection is replaced.

# 3. Connection recycling via idle_timeout
#    The reaper closes an idle connection. ActiveRecord opens a new one.
#    The new connection has no prepared statements.
#    ActiveRecord's statement cache thinks they still exist.

# The fix for PgBouncer:
ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     'localhost',
  database: 'myapp',
  username: 'myapp',
  password: ENV['DATABASE_PASSWORD'],
  pool:     10,
  prepared_statements: false,   # disable prepared statements entirely
)

# The fix for reconnection issues (without disabling prepared statements):
# ActiveRecord 7.1+ handles this automatically with connection verification.
# On older versions, wrap connection use in retry logic:
def with_retry
  retries = 0
  begin
    yield
  rescue PG::InvalidSqlStatementName, ActiveRecord::StatementInvalid => e
    raise unless e.message.include?('prepared statement')
    raise if retries >= 1
    retries += 1
    ActiveRecord::Base.connection.reconnect!
    retry
  end
end

"Connection pooling is not optional infrastructure for applications that grow. It is the difference between a household that manages its resources and one that leaves every door open, every light burning, every tap running."

— from You Don't Need Redis, Chapter 17: Sorting Out the Connection Poolers

The first and most common issue arises with connection poolers. PgBouncer in transaction mode — the most common configuration — reassigns backend PostgreSQL connections between transactions. Your application prepares a statement on connection A, then gets connection B for the next transaction. Connection B has never seen that prepared statement. PostgreSQL returns PG::InvalidSqlStatementName. The error appears intermittent because it only occurs when the pooler assigns a different backend connection, which depends on concurrency and timing.

The second issue occurs after a PostgreSQL restart. All prepared statements live in server memory and are lost when the server restarts. ActiveRecord's internal statement cache does not know this. It sends EXECUTE for a statement that no longer exists. Rails handles this with reconnection middleware that resets the statement cache. Without Rails, you need to handle it yourself.

The third issue — and the most subtle — is that ActiveRecord assigns prepared statement names sequentially (a1, a2, a3...) per connection. If a connection is closed and a new one is opened, the new connection's statement counter may conflict with what the cache expects. This manifests as queries returning wrong results or, more commonly, as PG::DuplicatePstatement errors.

If you are running any connection pooler between your application and PostgreSQL — PgBouncer, pgcat, Gold Lapel, or anything else — set prepared_statements: false. The performance penalty is small (typically 1-3% on query-heavy workloads) and the reliability improvement is significant. If you are connecting directly to PostgreSQL with no intermediary, prepared statements are fine and worth keeping.
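One way to encode that rule so it survives infrastructure changes is to derive the setting from whatever names your pooler in the environment. DATABASE_POOLER is a hypothetical variable for this sketch — nothing in ActiveRecord reads it:

```ruby
# True only when connecting straight to PostgreSQL. Any named pooler
# (pgbouncer, pgcat, ...) disables prepared statements, per the rule above.
def prepared_statements?(pooler = ENV['DATABASE_POOLER'])
  pooler.nil? || pooler.strip.empty?
end

prepared_statements?(nil)          # => true  (direct connection)
prepared_statements?('pgbouncer')  # => false (pooler in the path)
```

Pass the result as prepared_statements: in establish_connection, and the decision lives in one place instead of in tribal memory.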

Sinatra setup: the complete pattern

Sinatra's simplicity is its virtue and its liability. There is no middleware stack managing database connections. There is no after_action callback clearing checked-out connections. There is no config/database.yml. You build the wiring yourself.

Complete Sinatra + ActiveRecord setup
# app.rb — Sinatra with ActiveRecord, done properly
require 'sinatra'
require 'active_record'
require 'pg'

# Configure ActiveRecord before the first request, not inside a route
ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
  pool:     ENV.fetch('DATABASE_POOL', 5).to_i,
  reaping_frequency: 10,
  idle_timeout:      300,
)

# Verify at boot
ActiveRecord::Base.connection.execute('SELECT 1')

# Clear connections after every request — this is critical
after do
  # Return this thread's connection to the pool. On ActiveRecord 7.1+,
  # the non-deprecated spelling is
  # ActiveRecord::Base.connection_handler.clear_active_connections!
  ActiveRecord::Base.clear_active_connections!
end

# Define your models
class User < ActiveRecord::Base
  has_many :orders
end

class Order < ActiveRecord::Base
  belongs_to :user
end

get '/users/:id' do
  content_type :json
  user = User.find(params[:id])
  user.to_json(include: :orders)
end

get '/health' do
  ActiveRecord::Base.connection.execute('SELECT 1')
  { status: 'ok' }.to_json
rescue => e
  status 503
  { status: 'error', message: e.message }.to_json
end

The critical line is the after block calling clear_active_connections!. Without it, every Sinatra request checks out a connection from the pool and never returns it. In a threaded server like Puma, this exhausts the pool within seconds. In a forking server like Unicorn, each worker holds a connection permanently — less disastrous, but still wasteful.

A few details that are easy to overlook in the example above. The require 'pg' is not optional — ActiveRecord's PostgreSQL adapter delegates to the pg gem for the wire protocol, and it will raise LoadError at connection time if pg is not available. The health check endpoint performs a real database query rather than returning a static 200, which means your load balancer only routes traffic to instances with a live database connection. And the models are defined after the connection is established, which is not strictly necessary but avoids confusing load-order issues in larger applications.

The sinatra-activerecord gem automates some of this wiring, including the connection cleanup middleware. It is a reasonable choice if you want convention over configuration. But the gem adds Rake tasks, migration support, and database.yml parsing — machinery you may not need if you are running ActiveRecord purely for queries in a microservice. I have seen teams pull in sinatra-activerecord for the connection management, then spend an afternoon configuring the migration paths they will never use. Know what you are buying before you buy it.

Grape setup: middleware for connection cleanup

Grape, being Rack-based, lends itself to a middleware approach for connection management. The pattern is similar to Sinatra but expressed as a Rack middleware class rather than an after hook.

Grape API with connection management middleware
# config.ru — Grape API with ActiveRecord
require 'grape'
require 'active_record'

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
  pool:     ENV.fetch('DATABASE_POOL', 5).to_i,
  reaping_frequency: 10,
  idle_timeout:      300,
)

ActiveRecord::Base.connection.execute('SELECT 1')

# Middleware to return connections after each request
class ConnectionManagement
  def initialize(app)
    @app = app
  end

  def call(env)
    response = @app.call(env)
    response
  ensure
    ActiveRecord::Base.clear_active_connections!
  end
end

class UsersAPI < Grape::API
  format :json

  resource :users do
    get ':id' do
      User.find(params[:id])
    end

    get ':id/orders' do
      User.find(params[:id]).orders
    end
  end
end

use ConnectionManagement
run UsersAPI

The ensure clause in the middleware is doing the real work. Whether the request succeeds, raises an exception, or is interrupted, the connection is returned to the pool. This is strictly better than an after hook, which might not fire on certain error conditions — a Sinatra after block does not execute if the request is interrupted by a timeout, for instance. The ensure clause runs regardless.

The middleware approach has another advantage: it is composable. If you later add a second Grape API class, or mount a different Rack application alongside your Grape API, the same middleware handles connection cleanup for all of them. The after hook in Sinatra is scoped to the Sinatra application. Rack middleware applies to everything in the stack.

If your Grape API is mounted inside a Rails application, none of this is necessary — Rails handles it. This pattern is specifically for standalone Grape APIs running on bare Rack.

Standalone scripts and daemons: the forgotten case

Web applications get the attention. Sinatra guides exist. Grape examples circulate. But standalone Ruby scripts — background workers, CLI tools, data migration scripts, cron jobs — are where connection mismanagement causes the most damage, because there is no request-response cycle to provide a natural cleanup boundary.

Standalone script and daemon patterns
# worker.rb — standalone Ruby script (background job, CLI tool, daemon)
require 'active_record'
require 'active_support/time'  # for 1.day.ago below; not guaranteed to load otherwise

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
  pool:     1,               # single-threaded script: 1 is sufficient
  reaping_frequency: nil,    # disable reaper for short-lived scripts
)

class Order < ActiveRecord::Base; end

# For short-lived scripts, a direct connection is fine
orders = Order.where(status: 'pending').where('created_at < ?', 1.day.ago)
orders.each do |order|
  order.update!(status: 'expired')
end

# For long-running daemons, use with_connection blocks
# This checks out a connection, uses it, and returns it
loop do
  ActiveRecord::Base.connection_pool.with_connection do
    pending = Order.where(status: 'pending').count
    puts "Pending orders: #{pending}"
  end
  # Connection is back in the pool here — not held during sleep
  sleep 60
end

The distinction between short-lived scripts and long-running daemons is critical. A script that runs for 30 seconds, processes some records, and exits can treat the connection carelessly. The operating system closes the socket when the process terminates. The pool does not matter because there is only one thread and it is finished.

A daemon that runs for weeks is a different situation entirely. Without explicit connection management, the daemon holds a connection permanently — even during the 59 seconds of every minute that it is sleeping. If the daemon spawns threads for parallel processing, each thread claims a connection and may never return it. If PostgreSQL restarts during one of those 59-second sleeps, the daemon's connection becomes stale, and the next query fails with PG::ConnectionBad.

The with_connection pattern solves all of these issues. The connection is checked out when needed and returned when the block ends. During sleep, no connection is held. If PostgreSQL restarts, the next with_connection call creates a fresh connection from the pool. The daemon is resilient by default.

For Sidekiq users: Sidekiq's server middleware handles connection checkout and return automatically. For Resque, delayed_job, and custom threading: you are responsible. Treat every database interaction outside a web request as requiring explicit connection management. The inconvenience of typing with_connection is vanishingly small compared to the inconvenience of debugging a connection leak at 3am.

with_connection blocks: the antidote to connection leaks

I have mentioned with_connection several times already, but the pattern deserves its own examination, because it is the single most important tool for connection safety outside Rails.

Preventing connection leaks in threads and background jobs
# The with_connection pattern — preventing connection leaks

# BAD: connection checked out implicitly, never returned
Thread.new do
  users = User.where(active: true).count
  # Connection stays checked out for the lifetime of this thread.
  # If you spawn 20 threads, you exhaust a pool of 20.
  # The 21st thread hangs until checkout_timeout, then raises:
  #   ActiveRecord::ConnectionTimeoutError
  sleep 3600
end

# GOOD: explicit checkout and return
Thread.new do
  ActiveRecord::Base.connection_pool.with_connection do
    users = User.where(active: true).count
  end
  # Connection returned to the pool immediately.
  # The sleep happens without holding a database connection.
  sleep 3600
end

# ALSO GOOD: manual clear after work is done
Thread.new do
  users = User.where(active: true).count
  ActiveRecord::Base.clear_active_connections!
  sleep 3600
end

The rule is simple: if your code runs outside a web request — in a thread, a background job, a daemon loop, a CLI tool — wrap every database interaction in ActiveRecord::Base.connection_pool.with_connection. This checks out a connection for the duration of the block and returns it when the block completes, regardless of exceptions.

The alternative, clear_active_connections!, works but requires you to remember to call it. With with_connection, the cleanup is structural. You cannot forget it any more than you can forget a closing end.
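The structural guarantee is nothing more than an ensure clause wrapped around a checkout/checkin pair. A toy pool — not ActiveRecord's implementation, just the same shape — makes the mechanics visible:

```ruby
# A toy pool demonstrating why with_connection cannot leak: the checkin
# lives in an ensure, so it runs on success, exception, or early return.
class TinyPool
  def initialize(conns)
    @free = conns.dup
  end

  def available
    @free.size
  end

  def with_connection
    conn = @free.pop or raise "pool exhausted"
    yield conn
  ensure
    @free.push(conn) if conn   # returned even if the block raised
  end
end

pool = TinyPool.new([:conn_a, :conn_b])
pool.with_connection { |c| raise "query blew up" } rescue nil
pool.available  # => 2 — the connection came back despite the exception
```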

There is a subtlety here that trips up experienced Rubyists. The "bad" example in the code above does not look bad. User.where(active: true).count is a perfectly normal ActiveRecord call. The problem is invisible: the call implicitly checks out a connection, and without an explicit return mechanism, the connection stays checked out for the lifetime of the thread. The implicit checkout is the root of nearly every connection leak I have investigated. Nobody writes "check out a connection and hold it forever." They write a query. The forever part happens by itself.

Fork safety: the silent connection killer

Process forking — used by Unicorn, Resque, and the Spring preloader, among others — creates a problem that ActiveRecord cannot solve on its own. When a process forks, the child inherits the parent's file descriptors, including the TCP socket to PostgreSQL. Two processes now share one socket.

Handling fork safety
# Fork safety — the silent connection killer
#
# When a process forks (Unicorn, Resque, spring, etc.),
# the child inherits the parent's file descriptors —
# including the TCP socket to PostgreSQL.
# Two processes now share one socket. Chaos follows.

# Unicorn: reconnect after fork.
# DB_CONFIG is the same hash you passed to establish_connection at boot —
# outside Rails, a bare establish_connection has no database.yml to fall back on.
after_fork do |server, worker|
  ActiveRecord::Base.establish_connection(DB_CONFIG)
end

# Resque: disconnect the whole pool before forking, reconnect after
Resque.before_fork = Proc.new do
  ActiveRecord::Base.connection_pool.disconnect!
end

Resque.after_fork = Proc.new do
  ActiveRecord::Base.establish_connection(DB_CONFIG)
end

# Generic: detect a fork in any Ruby process by tracking the owning PID.
# DB_CONFIG is the hash you passed to establish_connection at boot.
$db_owner_pid = Process.pid

def ensure_fork_safe_connection!
  return if Process.pid == $db_owner_pid
  # Forked child: the pool's sockets belong to the parent. discard! forgets
  # them without sending a Terminate message down the shared socket.
  ActiveRecord::Base.connection_pool.discard!
  ActiveRecord::Base.establish_connection(DB_CONFIG)
  $db_owner_pid = Process.pid
end

# The symptom of a shared socket after fork:
#   PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
#   PG::UnableToSend: no connection to the server
#
# These errors appear randomly — sometimes the parent's query
# arrives on the child's socket, or vice versa. The connection
# is not "lost." It is confused about who is talking to whom.

The symptoms of a shared socket after fork are bewildering if you have not seen them before. Queries return results from someone else's query. Transactions interleave. The connection randomly drops with PG::ConnectionBad: server closed the connection unexpectedly — except the server did not close anything. The client's packets are arriving on the wrong process's socket.

Rails handles this with built-in fork hooks in its Puma and test suite integrations. Unicorn users know to add after_fork blocks. But outside Rails, the responsibility is yours. Every forking mechanism — Unicorn, Resque, Parallel gem, custom Process.fork calls — needs a reconnection hook.

The safest pattern is defensive: disconnect before fork, reconnect after fork. The parent closes its connection (or the pool entirely) before forking. The child establishes a fresh connection after fork. No shared sockets. No confused connections. No 3am pages.

Pool sizing for Puma, Unicorn, and threaded servers

The pool size formula depends entirely on your concurrency model. Get it wrong and you either waste PostgreSQL connections or starve your request threads.

Puma configuration with pool sizing
# Puma config with ActiveRecord pool sizing
# config/puma.rb (or puma.config.rb)

workers 2
threads 5, 5

on_worker_boot do
  ActiveRecord::Base.establish_connection(
    adapter:  'postgresql',
    host:     ENV.fetch('DATABASE_HOST', 'localhost'),
    database: ENV.fetch('DATABASE_NAME', 'myapp'),
    username: ENV.fetch('DATABASE_USER', 'myapp'),
    password: ENV.fetch('DATABASE_PASSWORD'),
    pool:     5,                  # match thread count
    reaping_frequency: 10,
    idle_timeout:      300,
  )
  ActiveRecord::Base.connection.execute('SELECT 1')
end

before_fork do
  # Runs in the master before workers are forked. If the master ever touched
  # the database (e.g. with app preloading), drop its connections here so no
  # PostgreSQL socket is inherited. Workers reconnect in on_worker_boot.
  ActiveRecord::Base.connection_handler.clear_all_connections!
end

# Pool sizing math:
#   2 workers x 5 threads = 10 concurrent requests
#   Each worker needs pool >= thread count
#   pool: 5 per worker = 10 total PostgreSQL connections
#
# If pool < threads:
#   Under load, threads wait for connections -> checkout_timeout -> 500 errors
# If pool >> threads:
#   Wasted connections sitting idle, consuming PostgreSQL slots

Puma (threaded): pool size must equal or exceed the thread count per worker. If threads 5, 5, set pool: 5. Total PostgreSQL connections = workers x pool = 2 x 5 = 10. If threads 5, 16 (variable), set pool: 16 (the max).

Unicorn (forking): each worker is a separate process with its own pool. With 4 workers and pool: 5, maximum connections = 20. But Unicorn workers are single-threaded, so each worker uses at most 1 connection at a time. Set pool: 2 (one active, plus one spare for anything a background thread might touch) and save 12 PostgreSQL slots.

Falcon (fiber-based): fibers share a single thread, so pool size should match the number of concurrent fibers that might access the database simultaneously. In practice, 5-10 is usually sufficient. Falcon's cooperative scheduling means fibers yield at I/O boundaries, so the actual concurrent database access is lower than the total fiber count.

The universal constraint: total connections across all workers, background jobs, and console sessions must not exceed PostgreSQL's max_connections (default: 100). Four Puma workers with pool: 5 is 20 connections. Add Sidekiq with 25 threads and pool: 25. Add a Rails console. Add a cron job. You are at 50+ connections from a single server. Scale to 3 servers and you are at 150 — already over the default limit. This arithmetic catches teams by surprise roughly once per quarter.
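The tally from that paragraph, written as the executable comment it deserves to be — the console-and-cron allowance is a guess, as it always is:

```ruby
# Connection budget for the scenario above: one server first, then the fleet.
puma_workers     = 4
pool_per_worker  = 5
sidekiq_pool     = 25
console_and_cron = 5    # ad-hoc sessions: an assumption, revisit periodically

per_server = puma_workers * pool_per_worker + sidekiq_pool + console_and_cron
servers    = 3
total      = per_server * servers
max_connections = 100   # PostgreSQL's default

puts "#{total} connections vs max_connections=#{max_connections}"
# 4*5 + 25 + 5 = 50 per server; x3 servers = 150 — over the default limit
```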

The arithmetic is worth writing down. Literally. A comment in your Puma configuration — or your deployment manifest, or your runbook — that says "3 servers x 2 workers x pool 5 = 30 connections, plus 3 Sidekiq processes at 25 each = 75, total 105, max_connections is 200" is the kind of documentation that prevents a 2am incident six months from now when someone adds a fourth server. (Note that threads do not multiply into this total: each worker process opens at most pool connections, however many threads share them.)
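Better still, make the comment executable. A minimal sketch, in which every number is an assumed topology and the constant names are mine, that a CI step or deploy check could run:

```ruby
# Connection budget as executable documentation. Adjust to your topology.
SERVERS         = 3     # app servers
PUMA_WORKERS    = 2     # Puma worker processes per server
PUMA_POOL       = 5     # pool size per Puma worker process
SIDEKIQ_PROCS   = 1     # Sidekiq processes per server
SIDEKIQ_POOL    = 25    # pool size per Sidekiq process
MAX_CONNECTIONS = 200   # PostgreSQL's max_connections

puma_total    = SERVERS * PUMA_WORKERS * PUMA_POOL      # 30
sidekiq_total = SERVERS * SIDEKIQ_PROCS * SIDEKIQ_POOL  # 75
total         = puma_total + sidekiq_total              # 105

puts format('Puma %d + Sidekiq %d = %d of %d backend slots',
            puma_total, sidekiq_total, total, MAX_CONNECTIONS)
abort 'Connection budget exceeded!' if total > MAX_CONNECTIONS
```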

Migrations without Rails: they work, with a few caveats

ActiveRecord's migration system is surprisingly self-contained. The syntax is the same, the DSL is the same, and the schema tracking via schema_migrations table works identically. What you lose is the Rake task ecosystem that Rails provides out of the box.

Running migrations without Rails
# Running ActiveRecord migrations without Rails
#
# ActiveRecord's migration system works independently of Rails.
# You need a Rakefile and a migration directory.

# Rakefile
require 'active_record'
require 'sinatra/activerecord/rake' # if using sinatra-activerecord gem

# Or, build it yourself:
require 'active_record'

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
)

namespace :db do
  desc "Run migrations"
  task :migrate do
    context = ActiveRecord::MigrationContext.new('db/migrate')
    context.migrate
    puts "Migrations complete. Current version: #{context.current_version}"
  end

  desc "Rollback last migration"
  task :rollback do
    context = ActiveRecord::MigrationContext.new('db/migrate')
    context.rollback
    puts "Rolled back. Current version: #{context.current_version}"
  end

  desc "Create the database"
  task :create do
    config = ActiveRecord::Base.connection_db_config.configuration_hash
    ActiveRecord::Base.establish_connection(config.merge(database: 'postgres'))
    ActiveRecord::Base.connection.create_database(config[:database])
    puts "Created database: #{config[:database]}"
  end

  desc "Drop the database"
  task :drop do
    config = ActiveRecord::Base.connection_db_config.configuration_hash
    ActiveRecord::Base.establish_connection(config.merge(database: 'postgres'))
    ActiveRecord::Base.connection.drop_database(config[:database])
    puts "Dropped database: #{config[:database]}"
  end
end

# db/migrate/20260305000001_create_users.rb
class CreateUsers < ActiveRecord::Migration[7.1]
  def change
    create_table :users do |t|
      t.string :email, null: false
      t.string :name
      t.timestamps
    end

    add_index :users, :email, unique: true
  end
end

# Then run: rake db:migrate

The migration files themselves are standard ActiveRecord migrations. Nothing changes about how you write them. The difference is in how you invoke them — instead of rails db:migrate, you call ActiveRecord::MigrationContext directly or build Rake tasks that do so.

A few caveats deserve mention. First, ActiveRecord::MigrationContext.new accepts a path to your migration directory. If that path is relative, it is resolved against the working directory of the process, not the directory of the file containing the call. Use an absolute path or File.expand_path to avoid surprises. Second, the schema_migrations table is created automatically the first time you run a migration — you do not need to create it yourself. Third, ActiveRecord::Base.connection_config was deprecated in ActiveRecord 6.1 and removed in 7.0; use ActiveRecord::Base.connection_db_config.configuration_hash instead.
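The relative-path caveat is cheap to neutralize. A one-liner sketch (MIGRATIONS_DIR is a name introduced here for illustration, not an ActiveRecord convention):

```ruby
# Resolve the migrations directory relative to this file, not to
# whatever directory the process happened to be started from.
# (__dir__ is nil in a REPL; fall back to the working directory there.)
MIGRATIONS_DIR = File.expand_path('db/migrate', __dir__ || Dir.pwd)

# Then run migrations against the absolute path:
#   ActiveRecord::MigrationContext.new(MIGRATIONS_DIR).migrate
```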

For teams that want the full Rails migration experience without Rails, the sinatra-activerecord gem provides rake db:migrate, rake db:rollback, rake db:seed, and rake db:schema:dump. It is the closest thing to a drop-in replacement for the Rails migration Rake tasks, and it is well-maintained. If you are running more than a handful of migrations, the gem is worth the dependency.

Debugging connection pool issues

When something goes wrong with the connection pool — and eventually, something will — you need visibility into what the pool is doing. Rails provides this through its logging and instrumentation. Without Rails, you build your own.

Connection pool diagnostics
# Diagnosing connection pool issues without Rails
#
# Rails provides detailed logging of connection pool activity.
# Without Rails, you build your own instrumentation.

# 1. Pool statistics — call this from a health check or monitoring endpoint
def pool_stats
  pool = ActiveRecord::Base.connection_pool
  {
    size:        pool.size,             # configured pool size
    connections: pool.connections.size,  # total connections (checked out + idle)
    active:      pool.connections.count(&:in_use?),  # currently checked out
    idle:        pool.connections.count { |c| !c.in_use? },
    waiting:     pool.num_waiting_in_queue,  # threads waiting for a connection
    reaper:      pool.reaper.frequency,      # nil means reaper is not running
  }
end

# 2. PostgreSQL-side view — who is actually connected?
ActiveRecord::Base.connection.execute(<<~SQL).to_a
  SELECT pid, usename, application_name, client_addr, state,
         query_start, state_change,
         now() - state_change AS idle_duration
  FROM pg_stat_activity
  WHERE datname = current_database()
    AND pid != pg_backend_pid()
  ORDER BY state_change DESC
SQL

# 3. Log connection establishment (development/debugging only).
#    The '!connection.active_record' event fires when a new database
#    connection is established, not on every pool checkout/checkin.
ActiveSupport::Notifications.subscribe('!connection.active_record') do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  puts "[ConnectionPool] #{event.name}: #{event.duration.round(1)}ms"
end

# 4. Detect leaks — periodic check for connections held too long
Thread.new do
  loop do
    pool = ActiveRecord::Base.connection_pool
    pool.connections.each do |conn|
      if conn.in_use? && conn.owner && !conn.owner.alive?
        puts "[LEAK] Connection #{conn.object_id} held by dead thread #{conn.owner}"
      end
    end
    sleep 30
  end
end

The pool_stats method above is the single most useful diagnostic tool for connection pool issues. Expose it through a health check endpoint, a monitoring hook, or a REPL in your running process. The key signals are:

waiting > 0 means threads are blocked waiting for a connection. If this number is consistently above zero, your pool is too small for your concurrency level. Increase pool or reduce concurrency.

active == size means the pool is fully utilized. Every connection is checked out. The next checkout attempt will wait. If this persists, you either have a connection leak (threads holding connections they are not using) or genuinely need more connections.

reaper == nil means the reaper thread is not running. Dead connections will accumulate. Set reaping_frequency explicitly.

connections > active means you have idle connections sitting in the pool. This is normal and expected — the pool creates connections up to the limit and keeps them available. But if connections equals a size of 10 and active is 1, you are holding 9 idle connections to PostgreSQL. Whether this matters depends on your max_connections headroom.
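These signals fold neatly into a small triage helper. The sketch below assumes the hash shape produced by the pool_stats method earlier in this section; pool_warnings, its thresholds, and its messages are illustrative rather than ActiveRecord API:

```ruby
# Turn a pool_stats-style hash into actionable warnings.
def pool_warnings(stats)
  warnings = []
  warnings << 'threads waiting on checkout: pool too small for concurrency' if stats[:waiting].positive?
  warnings << 'pool fully utilized: possible connection leak' if stats[:active] == stats[:size]
  warnings << 'reaper not running: set reaping_frequency explicitly' if stats[:reaper].nil?
  warnings
end

# A pool in trouble trips all three checks:
pool_warnings(size: 5, connections: 5, active: 5, idle: 0, waiting: 2, reaper: nil)
```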

The PostgreSQL-side view via pg_stat_activity tells you what your database sees, as opposed to what your application thinks. If your application reports 5 connections but pg_stat_activity shows 15, you have a leak — connections are being created but not tracked by the pool. This typically happens after a fork without reconnection, or when code bypasses the pool by calling PG.connect directly.

An honest word about when this is not the right approach

I should be forthcoming about the limits of this entire exercise, because a waiter who overstates his case is no waiter at all.

ActiveRecord outside Rails is a reasonable choice when you already know ActiveRecord, your team is productive in it, and you want a familiar query interface for a non-Rails application. It is a less reasonable choice in a few specific scenarios that deserve honest acknowledgment.

If your application is read-heavy with simple queries, ActiveRecord's connection pool overhead, object instantiation, and callback system are machinery you are paying for and not using. The Sequel gem is lighter, faster for simple queries, and handles connection pooling with fewer surprises outside Rails. Sequel was designed for standalone use from the beginning. ActiveRecord was designed for Rails and adapted for standalone use later. The difference shows in the edges — and this article is entirely about the edges.

If your application is a simple script that runs once and exits, the pg gem alone — with no ORM at all — is often sufficient. A direct PG.connect, a raw SQL query, and conn.close. No pool. No reaper. No configuration matrix. The overhead of setting up ActiveRecord's connection pool for a script that runs for 10 seconds is not the code itself — it is the cognitive overhead of knowing which 8 options to set and which to leave as defaults.

If your application runs behind PgBouncer or another connection pooler, you now have two connection pools — ActiveRecord's client-side pool and PgBouncer's server-side pool. They do not coordinate. ActiveRecord thinks it has 10 connections. PgBouncer thinks it has 20. The actual PostgreSQL backend connections are governed by PgBouncer's configuration, not ActiveRecord's. This works, but it is a layer of indirection that produces confusing diagnostics when something goes wrong. Disabling prepared statements and keeping the ActiveRecord pool smaller than PgBouncer's server pool are the essential rules. But two pools are inherently harder to reason about than one.

None of this means ActiveRecord outside Rails is wrong. It means it comes with responsibilities that Rails normally shoulders. This article exists so you can shoulder them confidently.

What if the connection pool managed itself?

The configuration burden described above — pool sizes per worker, reaper frequencies, idle timeouts, clear_active_connections! in every middleware, with_connection in every thread, mental arithmetic to stay under max_connections, prepared statement compatibility, fork safety hooks — exists because each application process manages its own connection pool independently. Every process is an island, convinced it is the only one that needs database access.

Gold Lapel changes this equation. It sits between your application and PostgreSQL as a transparent proxy with built-in session-mode connection pooling (default pool size: 20). Your application connects to Gold Lapel. Gold Lapel maintains the upstream connections to PostgreSQL.

ActiveRecord with Gold Lapel
# With Gold Lapel: simplified connection management
#
# GL sits between your app and PostgreSQL, managing a server-side
# connection pool (default: 20 connections). Your app connects to GL
# instead of directly to PostgreSQL.

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     'localhost',
  port:     6432,              # Gold Lapel's port
  database: 'myapp',
  username: 'myapp',
  password: ENV['DATABASE_PASSWORD'],
  pool:     5,                 # app-side pool can be smaller
  prepared_statements: false,  # disable for pooler compatibility
  reaping_frequency: 10,
  idle_timeout: 300,
)

# What changes with GL in the picture:
#
# Before GL (direct to PostgreSQL):
#   3 Puma workers x 5 threads x pool 5 = 15 connections
#   4 app servers x 15 = 60 PostgreSQL backend connections
#   Add sidekiq, cron jobs, console sessions... easily 80-100
#   PostgreSQL max_connections: 100 (default). Cutting it very close.
#
# With GL:
#   Same 4 app servers still open 60 connections — to GL
#   GL maintains 20 connections to PostgreSQL
#   60 app connections share 20 backend connections
#   Headroom restored. Scaling adds app connections, not PG load.

The practical effect: your application-side pool can be smaller because Gold Lapel absorbs the multiplexing. You no longer need to coordinate pool sizes across Puma workers, Sidekiq processes, and cron jobs to stay under max_connections. Scaling from 2 app servers to 8 adds application connections to Gold Lapel, not backend connections to PostgreSQL. The prepared statement problem disappears because Gold Lapel handles statement caching at the proxy layer — your application can disable prepared_statements without sacrificing the parse-time savings.

The establish_connection call still matters. The after block in Sinatra still matters. The with_connection pattern in background jobs still matters. Fork safety still matters. Application-side connection hygiene does not go away. But the consequences of imperfect configuration become far less severe when there is a properly managed pool absorbing the impact between your application and your database. A pool size of 5 that should have been 10 is a pool exhaustion incident when connecting directly to PostgreSQL. It is a minor latency increase when connecting through Gold Lapel.

The complete checklist

If you take nothing else from this article, take this. Every non-Rails ActiveRecord application connecting to PostgreSQL should verify the following.

1. Boot-time verification. Call ActiveRecord::Base.connection.execute('SELECT 1') after establish_connection. Fail fast if PostgreSQL is unreachable.

2. Pool size matches concurrency. Puma threads per worker, Sidekiq concurrency, or manual thread count. The pool must be at least as large as the maximum concurrent database-using threads in the process.

3. Reaper is running. Set reaping_frequency: 10 explicitly. Verify with ActiveRecord::Base.connection_pool.reaper.frequency.

4. Idle timeout is set. idle_timeout: 300 for most applications. This prevents long-running processes from holding idle connections indefinitely.

5. Connection cleanup after every request. clear_active_connections! in an after hook (Sinatra) or ensure clause in middleware (Rack/Grape).

6. with_connection for all non-request code. Background jobs, threads, daemons, scripts. No exceptions.

7. Fork safety hooks. If using Unicorn, Resque, or any other forking mechanism, reconnect after fork.

8. Prepared statements disabled if using a connection pooler. PgBouncer, pgcat, Gold Lapel — any intermediary means prepared_statements: false.

9. Connection arithmetic documented. Total connections across all processes, all servers, all background jobs. Written down. Compared against max_connections.
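Several of these items collapse into a single establish_connection call. Here is a hedged sketch of that hardened baseline; the environment variable names are placeholders consistent with earlier examples, and DB_POOLER is an assumed flag of my own, not a standard:

```ruby
# Checklist items 1-4 and 8 in one place. Adjust names to your environment.
require 'active_record'

ActiveRecord::Base.establish_connection(
  adapter:  'postgresql',
  host:     ENV.fetch('DATABASE_HOST', 'localhost'),
  database: ENV.fetch('DATABASE_NAME', 'myapp'),
  username: ENV.fetch('DATABASE_USER', 'myapp'),
  password: ENV.fetch('DATABASE_PASSWORD'),
  pool:     Integer(ENV.fetch('MAX_THREADS', 5)),  # item 2: match concurrency
  reaping_frequency: 10,                           # item 3: reaper running
  idle_timeout:      300,                          # item 4: drop idle conns
  prepared_statements: !ENV['DB_POOLER'],          # item 8: false behind a pooler
)

# Item 1: fail fast if PostgreSQL is unreachable.
ActiveRecord::Base.connection.execute('SELECT 1')
```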

ActiveRecord is a fine guest. It simply requires a proper host. With or without Rails, with or without Gold Lapel, the connection pool deserves the same attention you give your queries and your indexes. Preferably before the first ConnectionTimeoutError, not after.


While the matter of pool sizing is fresh in your mind, there is a room in the manor devoted to this very subject. I have prepared a guide to Rails connection pool sizing with Solid Queue that addresses the Puma and background worker calculations in considerably more detail.