ActiveRecord Callbacks Are Tripling Your PostgreSQL Query Count. Permit Me to Show You.
You wrote user.update!(name: "Stephen"). PostgreSQL received five queries wrapped in a single transaction. Allow me to account for each one.
Good evening. Your save is not what it appears to be.
I must inform you of something about your ActiveRecord models that may be uncomfortable. When you call user.save!, you believe you are executing one query. An UPDATE. Perhaps an INSERT. One database operation, brisk and purposeful.
You are not.
ActiveRecord's lifecycle callbacks — those tidy declarations at the top of your model — each carry a cost denominated in PostgreSQL queries. A validates :email, uniqueness: true fires a SELECT before every save. A counter_cache: true triggers an UPDATE on the parent record. A touch: true cascades upward through every ancestor in the chain. Each callback is individually reasonable. Collectively, they are tripling your query count.
I have audited the most common callbacks, measured what they actually send to PostgreSQL, and run EXPLAIN ANALYZE on every one. The findings are worth your attention.
And I should note — before we proceed — that this is not an argument against callbacks. ActiveRecord callbacks exist for legitimate reasons. They enforce data integrity, maintain denormalized caches, and keep related records in sync. The problem is not that they exist. The problem is that their costs are invisible. You declare them in Ruby. The bill arrives in PostgreSQL. And nobody is reading the bill.
What does a single save actually generate?
Consider a model that is, by Rails standards, unremarkable. Two uniqueness validations, a belongs_to with touch and counter_cache, a couple of after-save hooks. Nothing exotic. Nothing a code reviewer would question.
class User < ApplicationRecord
belongs_to :organization, touch: true, counter_cache: true
has_many :posts, dependent: :destroy
validates :email, uniqueness: { scope: :organization_id }
validates :username, uniqueness: true
before_save :normalize_email
after_save :update_search_index
after_commit :sync_to_crm, on: :update
end
Nine lines of declarations. Each one reads as a simple intent: keep this field unique, keep the parent timestamp fresh, keep the count accurate. The model is clean. Readable. A senior engineer would approve it without comment.
Now watch what happens when you change the user's name:
-- You write this in your controller:
-- user.update!(name: "Stephen")
--
-- ActiveRecord sends ALL of this to PostgreSQL:
-- 1. BEGIN transaction
BEGIN
-- 2. validates uniqueness of email (scoped)
SELECT 1 AS one FROM "users"
WHERE "users"."email" = 'stephen@example.com'
AND "users"."organization_id" = 7
AND "users"."id" != 42
LIMIT 1
-- 3. validates uniqueness of username
SELECT 1 AS one FROM "users"
WHERE "users"."username" = 'sgibson'
AND "users"."id" != 42
LIMIT 1
-- 4. The actual UPDATE you asked for
UPDATE "users"
SET "name" = 'Stephen', "updated_at" = '2025-03-05 14:22:31'
WHERE "users"."id" = 42
-- 5. touch: true on belongs_to :organization
UPDATE "organizations"
SET "updated_at" = '2025-03-05 14:22:31'
WHERE "organizations"."id" = 7
-- 6. counter_cache update (no-op on name change — Rails fires
-- a +0 update here; actual increments happen on create/destroy)
UPDATE "organizations"
SET "users_count" = COALESCE("users_count", 0) + 0,
"updated_at" = '2025-03-05 14:22:31'
WHERE "organizations"."id" = 7
-- 7. COMMIT
COMMIT
That is five queries for a name change, wrapped in a BEGIN and a COMMIT: two uniqueness checks, the UPDATE you asked for, a touch on the parent, and a counter cache update. The two after-save callbacks (update_search_index and sync_to_crm) may generate additional queries of their own — I have excluded them here because their cost depends entirely on implementation.
This is not pathological. This is a Tuesday.
If update_search_index fires even a single query, and sync_to_crm does the same, you are at seven queries for a name change. If that model is saved on every request — say, to update a last_seen_at timestamp — you are executing seven queries per request that are invisible to anyone reading the controller code. The controller says user.update!. PostgreSQL hears a small conference.
The callback lifecycle: where your queries actually live
To understand why callbacks generate the queries they do, you need to see where they sit in ActiveRecord's save lifecycle. This is the full chain for an update operation:
# The full lifecycle of user.update!(name: "Stephen"):
#
# 1. before_validation
# 2. after_validation
# 3. before_save
# └── normalize_email (your callback)
# 4. before_update
# 5. ┌─ QUERIES INSIDE TRANSACTION ────────────────────┐
# │ BEGIN │
# │ SELECT 1 AS one ... (validates email) │
# │ SELECT 1 AS one ... (validates username) │
# │ UPDATE users SET name = ... (the actual save) │
# │ UPDATE organizations SET updated_at (touch) │
# │ UPDATE organizations SET users_count (counter) │
# └──────────────────────────────────────────────────┘
# 6. after_update
# 7. after_save
# └── update_search_index (your callback — inside txn)
# 8. COMMIT
# 9. after_commit
# └── sync_to_crm (your callback — outside txn)
#
# Steps 1-8 hold the transaction open.
# The longer steps 5-7 take, the longer the locks are held.
The critical observation: everything between BEGIN and COMMIT holds the transaction open. The longer your callbacks take, the longer the row-level locks are held. An after_save callback that performs a slow external API call holds the lock on your user row — acquired by the UPDATE just before it — for the entire duration of that call. Every other request trying to update the same user waits.
This is why after_commit exists — it runs after the transaction has already committed, releasing the locks first. But after_commit brings its own complications:
# after_save runs INSIDE the transaction:
class Order < ApplicationRecord
after_save :notify_warehouse
end
# If notify_warehouse raises an error, the entire transaction rolls back.
# Your order update is lost because a notification failed.
# after_commit runs AFTER the transaction succeeds:
class Order < ApplicationRecord
after_commit :notify_warehouse, on: :update
end
# The order is saved regardless of what happens in the callback.
# But: the callback now runs outside the transaction, so it cannot
# be rolled back. If it fails, you need a separate retry mechanism.
# Neither is universally better. The choice depends on whether
# the callback's failure should prevent the save. The choice between after_save and after_commit is a genuine trade-off. after_save can roll back with the transaction if it fails, but holds the transaction open longer. after_commit releases the transaction immediately but cannot be rolled back. Neither is wrong. Both have costs that manifest in PostgreSQL — either as longer-held locks or as potential data inconsistencies that need separate cleanup.
The full callback cost table
I have catalogued the query cost of every common ActiveRecord callback pattern. This is the table I wish someone had published years ago.
| Callback | Fires on | Queries generated | Risk |
|---|---|---|---|
| validates uniqueness | Every save (create + update) | 1 SELECT per field | Seq scan without index |
| validates uniqueness (scoped) | Every save | 1 SELECT per field | Needs compound index |
| counter_cache: true | Create, destroy, reparent | 1 UPDATE per association | Lock contention on parent row |
| touch: true | Every save on child | 1 UPDATE per ancestor | Cascades up the chain |
| dependent: :destroy | Parent destroy | 1 SELECT + 1 DELETE per batch | Recursive — can cascade |
| after_save (custom) | Every save | Depends on implementation | Unbounded — can fire queries, enqueue jobs, anything |
| after_commit | After transaction commits | Depends on implementation | Often triggers external API calls or additional DB writes |
The arithmetic is straightforward. A model with two uniqueness validations, a counter cache, and a touch generates four extra queries on every save. If that model also has a custom after_save that fires two queries of its own, you are at seven. Your controller action says user.update!. PostgreSQL sees a small transaction.
The table reveals something else: the risk column is entirely about PostgreSQL, not Ruby. Sequential scans, lock contention, cascade depth — these are database problems caused by application-level declarations. The person writing counter_cache: true may never have heard of row-level locks. The person debugging the lock contention in production may never have looked at the model file. The gap between the declaration and the consequence is the entire problem.
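The arithmetic above can be sketched as a back-of-envelope estimator. This is plain Ruby, not Rails — the weights mirror the table's per-callback costs and are illustrative, not measured:

```ruby
# Back-of-envelope estimate of queries per save, mirroring the
# cost table above. Weights are illustrative, not measured.
CALLBACK_QUERY_COST = {
  uniqueness_validation: 1, # one SELECT per validated field, every save
  counter_cache:         1, # one UPDATE on the parent
  touch:                 1  # one UPDATE per ancestor in the chain
}.freeze

def queries_per_save(uniqueness_fields:, counter_caches:, touch_depth:,
                     custom_callback_queries: 0)
  1 + # the INSERT or UPDATE you actually asked for
    uniqueness_fields * CALLBACK_QUERY_COST[:uniqueness_validation] +
    counter_caches * CALLBACK_QUERY_COST[:counter_cache] +
    touch_depth * CALLBACK_QUERY_COST[:touch] +
    custom_callback_queries
end

# The User model from the top of the article:
puts queries_per_save(uniqueness_fields: 2, counter_caches: 1, touch_depth: 1)
# => 5
```

Run it against your own models' declarations before opening the PostgreSQL log; the log should confirm the estimate.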
"The gap between what the ORM expresses and what PostgreSQL executes is where performance problems live. Not in the database. Not in the application. In the translation."
— from You Don't Need Redis, Chapter 3: The ORM Tax
How validates_uniqueness_of becomes a sequential scan
This is the callback that deserves the most scrutiny, because it fires on every save — both creates and updates — and its performance depends entirely on whether the right index exists.
Here is what validates_uniqueness_of sends to PostgreSQL, and what happens with and without an index:
-- What validates :email, uniqueness: true generates before every save:
EXPLAIN ANALYZE
SELECT 1 AS one FROM "users"
WHERE "users"."email" = 'stephen@example.com'
AND "users"."id" != 42
LIMIT 1;
-- Without an index on email:
-- Seq Scan on users (cost=0.00..4128.00 rows=1 width=4)
-- (actual time=18.342..18.342 rows=0 loops=1)
-- Filter: ((id <> 42) AND (email = 'stephen@example.com'))
-- Rows Removed by Filter: 150000
-- Planning Time: 0.089 ms
-- Execution Time: 18.371 ms
-- With an index on email:
-- Index Scan using index_users_on_email on users
-- (cost=0.42..8.44 rows=1 width=4)
-- (actual time=0.024..0.024 rows=0 loops=1)
-- Index Cond: (email = 'stephen@example.com')
-- Filter: (id <> 42)
-- Planning Time: 0.082 ms
-- Execution Time: 0.041 ms
-- 448x faster. Same callback. Same save.
Without an index, every save triggers a full table scan. On a 150,000-row users table, that is 18ms of CPU time just to confirm the email is unique — before the actual UPDATE even begins. With an index, the same check takes 0.04ms. That is a 448x difference on the same callback, the same save, the same line of Ruby.
And this callback fires on every save. Not just creates — updates too. Changing a user's name triggers the email uniqueness check. Changing their avatar triggers the email uniqueness check. ActiveRecord does not inspect which attributes changed before running validations. It runs all of them. Every time.
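The behavior is easy to model outside Rails. Here is a toy sketch — not ActiveRecord internals, just the shape of the problem — in which save runs every validation check no matter which attribute was written:

```ruby
# Toy sketch: validations are a flat list, and every save runs all
# of them, regardless of which attribute actually changed.
class ToyRecord
  attr_reader :uniqueness_checks_fired

  def initialize
    @attributes = { email: "a@example.com", name: "A", avatar: nil }
    @unique_fields = %i[email username] # fields with uniqueness checks
    @uniqueness_checks_fired = 0
  end

  def update(attrs)
    @attributes.merge!(attrs)
    # No "did this field change?" inspection happens here:
    @unique_fields.each { @uniqueness_checks_fired += 1 }
  end
end

record = ToyRecord.new
record.update(name: "Stephen")   # a name change...
record.update(avatar: "cat.png") # ...and an avatar change
puts record.uniqueness_checks_fired # => 4 (two checks x two saves)
```

In real Rails, the standard mitigation is conditioning the validation — for example, if: :will_save_change_to_email? — so the SELECT fires only when the field was actually written.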
The scoped uniqueness trap
Scoped uniqueness validations are worse, because they need a compound index that Rails migrations do not always create:
# Scoped uniqueness generates a compound query:
validates :slug, uniqueness: { scope: :tenant_id }
# Before every save:
SELECT 1 AS one FROM "articles"
WHERE "articles"."slug" = 'hello-world'
AND "articles"."tenant_id" = 12
AND "articles"."id" != 99
LIMIT 1
# This needs a compound index:
# CREATE INDEX idx_articles_tenant_slug
# ON articles (tenant_id, slug);
#
# Without it, you get a sequential scan on every save
# for every article in a multi-tenant app.
Multi-tenant applications are especially vulnerable here. Every model with uniqueness: { scope: :tenant_id } needs a compound index on (tenant_id, column). If any of those indexes are missing, you are running sequential scans on every save, for every tenant, on every request.
I have seen multi-tenant Rails applications with eight or nine models that use scoped uniqueness validations. Of those nine, four had the correct compound index. The other five were performing sequential scans — 18ms each, five times per request, on every request. That is 90ms of pure scanning that could have been 0.2ms. The application was "slow" and nobody could explain why, because no single query was slow enough to trigger an alert. The sequential scans were hiding in plain sight, distributed across five models, firing silently on every save.
An honest note on validates_uniqueness_of
I should be forthcoming about something: validates_uniqueness_of is not actually sufficient for preventing duplicates. It performs a SELECT to check for existing records, then proceeds with the INSERT or UPDATE. Between the SELECT and the write, another request can insert a duplicate. This is a well-documented race condition — the Rails documentation itself acknowledges the limitation explicitly.
The authoritative uniqueness guarantee is always the database index — a UNIQUE constraint in PostgreSQL. The Rails validation provides a friendly error message. The database constraint provides the actual guarantee. You need both: the constraint for correctness, the validation for user experience. But if you had to choose one, choose the constraint. It cannot be raced.
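The race is easy to reproduce in miniature. Here is a deterministic sketch — a toy check-then-insert store with the two "requests" interleaved by hand rather than by a scheduler:

```ruby
# Toy check-then-insert store. The uniqueness "SELECT" and the
# "INSERT" are separate steps, so two interleaved writers can
# both pass the check. Deterministic interleaving, no threads.
class ToyTable
  def initialize
    @rows = []
  end

  def exists?(email) # the validation's SELECT
    @rows.include?(email)
  end

  def insert(email) # the INSERT
    @rows << email
  end

  def count(email)
    @rows.count(email)
  end
end

table = ToyTable.new
email = "stephen@example.com"

# Request A and request B, interleaved the unlucky way:
a_ok = !table.exists?(email) # A: "no duplicate, safe to insert"
b_ok = !table.exists?(email) # B: same conclusion, same moment
table.insert(email) if a_ok  # A inserts
table.insert(email) if b_ok  # B inserts the duplicate

puts table.count(email) # => 2
```

A UNIQUE index changes the failure mode: the second INSERT raises ActiveRecord::RecordNotUnique instead of silently succeeding, which is exactly the guarantee the validation alone cannot give.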
How touch: true cascades through your entire object graph
The touch: true option on belongs_to updates the parent's updated_at timestamp whenever the child is saved. This is useful for cache invalidation — Rails' Russian doll caching depends on it.
It is also recursive.
class Comment < ApplicationRecord
belongs_to :post, touch: true
end
class Post < ApplicationRecord
belongs_to :user, touch: true
has_many :comments
end
class User < ApplicationRecord
belongs_to :organization, touch: true
has_many :posts
end
# When you save a comment:
comment.update!(body: "Fixed typo")
# PostgreSQL receives:
# 1. UPDATE comments SET body = '...', updated_at = '...' WHERE id = 831
# 2. UPDATE posts SET updated_at = '...' WHERE id = 214
# 3. UPDATE users SET updated_at = '...' WHERE id = 42
# 4. UPDATE organizations SET updated_at = '...' WHERE id = 7
#
# Four tables touched. One typo fixed.
Four UPDATE statements. Four rows locked. Four updated_at columns rewritten. Because someone fixed a typo in a comment.
The performance cost is not just the queries themselves — it is the row-level locks. Each UPDATE acquires an exclusive lock on that row for the duration of the transaction. If your application processes comments concurrently (and it does), those locks on the organization row will serialize. Two users editing comments in the same organization at the same time? One waits for the other's transaction to commit.
In high-throughput applications, touch: true chains longer than two levels are a lock contention factory. I have seen touch cascades contribute measurably to p99 latency on applications doing fewer than 500 requests per second. The mechanism is simple: the organization row becomes a hot row. Every comment save in the organization touches it. Every touch acquires an exclusive lock. Every lock forces subsequent touches to queue. The queue grows with concurrency.
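The cascade is a simple recursion: each save walks the belongs_to chain upward, issuing one UPDATE per ancestor. A plain-Ruby sketch of that walk, with model names mirroring the example above:

```ruby
# One save issues one UPDATE for the record itself, plus one UPDATE
# (and one row lock) per ancestor in the touch chain.
TouchNode = Struct.new(:table, :parent)

organization = TouchNode.new("organizations", nil)
user         = TouchNode.new("users", organization)
post         = TouchNode.new("posts", user)
comment      = TouchNode.new("comments", post)

def updates_for_save(node)
  statements = ["UPDATE #{node.table} SET ... WHERE id = ?"]
  ancestor = node.parent
  while ancestor
    statements << "UPDATE #{ancestor.table} SET updated_at = ? WHERE id = ?"
    ancestor = ancestor.parent
  end
  statements
end

updates_for_save(comment).each { |sql| puts sql }
# Four UPDATEs — and four row locks — for one comment save.
```

The walk makes the depth cost visible: every level you add to the chain adds one UPDATE, one lock, and one more shared row that concurrent saves must queue behind.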
When touch: true earns its keep
I would be a poor guide if I only showed you the costs. touch: true exists because Russian doll caching is genuinely effective, and it depends on accurate updated_at timestamps. If your application uses fragment caching heavily — and many well-built Rails applications do — the touch cascade is the mechanism that keeps those caches correct.
The question is not "should I use touch: true" but "how deep should the chain go?" A single level — comment touches post — is almost always fine. Two levels — comment touches post, post touches user — is reasonable if the user record is not a hot row. Three levels — comment touches post, post touches user, user touches organization — is where I begin to counsel restraint. The organization row is shared by every user in the organization. It will become contended.
The alternative at depth three is typically a background job that updates the ancestor's timestamp asynchronously, outside the original transaction. This releases the lock on the organization row immediately and updates it a few milliseconds later. The cache is stale for a moment. In practice, nobody notices.
counter_cache: the deceptively expensive counter
Counter caches are seductive. Instead of user.posts.count hitting the database with a COUNT query on every page load, you maintain a denormalized posts_count column on the user. Fast reads. No aggregate queries. The denormalized count is always accurate because ActiveRecord updates it automatically.
class Post < ApplicationRecord
belongs_to :user, counter_cache: true
belongs_to :category, counter_cache: true
has_many :comments
end
# Creating a post:
Post.create!(title: "Hello", user: current_user, category: tech)
# Generates:
# 1. INSERT INTO posts (title, user_id, category_id, ...) VALUES (...)
# 2. UPDATE users SET posts_count = COALESCE(posts_count, 0) + 1 WHERE id = 42
# 3. UPDATE categories SET posts_count = COALESCE(posts_count, 0) + 1 WHERE id = 3
#
# Deleting that post later? Same thing in reverse — 3 queries.
Each counter_cache declaration adds an UPDATE to the parent on create and destroy. Two counter caches means two extra queries on every create, two on every destroy, and — the part people forget — four on a reparent (decrement old parent, increment new parent, for each counter). The parent row locks apply here as well. A frequently-created child with counter_cache: true on a shared parent will create lock contention on that parent row.
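The per-event arithmetic, sketched in plain Ruby for the two-counter Post model above (the reparent case assumes both counted parents change, as in the worst case described):

```ruby
# Extra UPDATEs from counter caches alone, per lifecycle event.
# Assumes every counted parent changes on a reparent (worst case).
def counter_cache_updates(event, counters:)
  case event
  when :create, :destroy then counters # +1 or -1 on each parent
  when :reparent then counters * 2     # decrement old, increment new
  else 0
  end
end

puts counter_cache_updates(:create, counters: 2)   # => 2
puts counter_cache_updates(:reparent, counters: 2) # => 4
puts counter_cache_updates(:update, counters: 2)   # => 0
```

Note the last line: per the cost table above, a plain attribute update fires no counter updates — the cost lands entirely on creates, destroys, and reparents.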
I have prepared a more thorough treatment of this subject — the counter cache contention guide explores the lock mechanics in depth. The short version: if the parent row receives more than a few counter updates per second, the row lock becomes the bottleneck, and your write throughput is capped by the transaction duration of each counter update.
The alternatives — counter_culture gem with background counting, or simply running COUNT with an index and caching the result at the application level — trade strict accuracy for reduced lock contention. For a social media application where the comment count on a popular post can change dozens of times per second, the approximation is nearly always the correct choice.
dependent: :destroy and the recursive cascade
class User < ApplicationRecord
has_many :posts, dependent: :destroy
has_many :comments, dependent: :destroy
has_many :notifications, dependent: :destroy
end
# user.destroy! generates:
# 1. SELECT * FROM posts WHERE user_id = 42
# 2. DELETE FROM posts WHERE id = 101
#    -- one DELETE per post: each record is instantiated and
#    -- destroyed individually so that its own callbacks can run
# -- And if Post also has dependent: :destroy on comments:
# 3. SELECT * FROM comments WHERE post_id = 101
# 4. DELETE FROM comments WHERE id = ...
#    -- one DELETE per comment
# -- ... repeated for every post, then the user's own associations:
# 5. SELECT * FROM comments WHERE user_id = 42
# 6. DELETE FROM comments WHERE id = ...
# 7. SELECT * FROM notifications WHERE user_id = 42
# 8. DELETE FROM notifications WHERE id = ...
#
# A user with 50 posts and 200 comments?
# Easily 300+ queries for a single destroy.
The critical detail: dependent: :destroy loads each child record into memory and calls destroy on it individually, which fires all of that record's callbacks. If the child also has dependent: :destroy, the cascade continues. This is by design — it ensures callbacks run — but it means a single user.destroy! can generate dozens or hundreds of queries depending on the depth and breadth of the association tree.
A user with 50 posts, each post with 10 comments, and dependent: :destroy on both associations: 1 SELECT for posts, 50 DELETEs for posts, 50 SELECTs for comments (one per post), and up to 500 DELETEs for comments. If the comments have their own callbacks — counter cache on the post, touch on the post — each of those 500 destroys triggers additional queries. The numbers compound.
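The compounding is worth computing before calling destroy! on anything with a deep tree. A rough count under the assumptions in the paragraph above (one SELECT per association load, one DELETE per destroyed record):

```ruby
# Rough statement count for user.destroy! with nested
# dependent: :destroy, using the numbers from the paragraph above.
def destroy_query_count(posts:, comments_per_post:,
                        queries_per_comment_destroy: 1)
  1 +       # SELECT the user's posts
    posts + # one DELETE per post
    posts + # one comments SELECT per post
    posts * comments_per_post * queries_per_comment_destroy
end

puts destroy_query_count(posts: 50, comments_per_post: 10)
# => 601
# If each comment destroy also fires a counter cache update and a
# touch, each comment costs 3 statements instead of 1:
puts destroy_query_count(posts: 50, comments_per_post: 10,
                         queries_per_comment_destroy: 3)
# => 1601
```

Six hundred statements becomes sixteen hundred the moment the leaf records carry two callbacks of their own — which is exactly the compounding the prose describes.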
The alternatives
If you do not need callbacks on the children, dependent: :delete_all issues a single SQL statement:
class User < ApplicationRecord
has_many :posts, dependent: :destroy # Needs callbacks? Keep :destroy.
has_many :notifications, dependent: :delete_all # No callbacks needed? One query.
end
# With dependent: :delete_all:
# DELETE FROM notifications WHERE user_id = 42
#
# One query. No instantiation. No callbacks.
# The trade-off: after_destroy hooks on Notification will NOT fire.
# If you have no such hooks, this is strictly better.
One query instead of N+1. The trade-off is explicit: no callbacks fire. For many use cases — notifications, session records, activity logs — that is exactly what you want. The child record has no meaningful cleanup to perform. It simply needs to not exist.
For the strongest guarantee, consider the database-level approach:
-- The database-level alternative: ON DELETE CASCADE
ALTER TABLE notifications
ADD CONSTRAINT fk_notifications_user
FOREIGN KEY (user_id)
REFERENCES users(id)
ON DELETE CASCADE;
-- Now when users(42) is deleted, PostgreSQL handles the cascade.
-- No Ruby. No SELECT. No instantiation. No N+1.
-- The database does what databases do best: enforce referential integrity.
-- The migration:
-- add_foreign_key :notifications, :users, on_delete: :cascade
A foreign key with ON DELETE CASCADE handles the deletion entirely within PostgreSQL. No Ruby instantiation, no N+1, no callback chain. The database enforces referential integrity the way databases have been enforcing referential integrity for decades — at the storage layer, atomically, in a single operation.
The honest trade-off: you lose all Ruby-level callbacks on the child records. If those callbacks do important work — sending emails, cleaning up files, updating search indexes — you need another mechanism to handle those side effects. But if the callbacks are simply maintaining other denormalized data in the database, PostgreSQL triggers can often do the same work more efficiently.
The compound effect: a real-world audit
Here is a model I encounter in variations across Rails applications. It combines several callback patterns that are individually defensible and collectively alarming.
# A common pattern: audit logging via callbacks
class Order < ApplicationRecord
belongs_to :customer, touch: true, counter_cache: true
has_many :line_items, dependent: :destroy
validates :reference, uniqueness: { scope: :customer_id }
after_save :create_audit_log
after_save :recalculate_customer_stats
private
def create_audit_log
AuditLog.create!(
auditable: self,
action: previously_new_record? ? 'create' : 'update',
user: Current.user,
changes: saved_changes.to_json
)
end
def recalculate_customer_stats
customer.update!(
total_spent: customer.orders.sum(:total),
last_order_at: customer.orders.maximum(:created_at)
)
end
end
# order.update!(status: "shipped") now generates:
#
# 1. SELECT for validates uniqueness of reference
# 2. UPDATE orders (the actual change)
# 3. UPDATE customers SET updated_at (touch: true)
# 4. UPDATE customers SET orders_count (counter_cache)
# 5. INSERT INTO audit_logs (after_save callback)
# 6. SELECT SUM(total) FROM orders WHERE customer_id = 5
# 7. SELECT MAX(created_at) FROM orders WHERE customer_id = 5
# 8. UPDATE customers SET total_spent, last_order_at (recalculate)
#
# 8 queries. You wrote one line of Ruby.
Eight queries to update a status field. Each one is doing something reasonable. The uniqueness check prevents duplicate references. The touch keeps the cache fresh. The counter cache maintains a denormalized count. The audit log records the change. The stats recalculation keeps the dashboard accurate.
The problem is not that any of these are wrong. The problem is that nobody counted. Nobody opened the PostgreSQL log and watched what order.update!(status: "shipped") actually does. The code review approved each callback individually. Nobody reviewed the aggregate.
Reducing the compound cost
Look at the recalculate_customer_stats callback. It runs on every save. But does updating the order status from "pending" to "shipped" change total_spent or last_order_at? It does not. Those values only change when an order is created or when the total column changes.
# Instead of recalculating on every save:
class Order < ApplicationRecord
after_save :recalculate_customer_stats # Fires on EVERY update
def recalculate_customer_stats
customer.update!(
total_spent: customer.orders.sum(:total),
last_order_at: customer.orders.maximum(:created_at)
)
end
end
# Consider: does the status change actually affect these stats?
# Changing status from "pending" to "shipped" does not change
# total_spent or last_order_at. Those only change on create
# or when the total column changes.
# A more precise version:
class Order < ApplicationRecord
after_save :recalculate_customer_stats,
if: -> { saved_change_to_total? || previously_new_record? }
def recalculate_customer_stats
customer.update!(
total_spent: customer.orders.sum(:total),
last_order_at: customer.orders.maximum(:created_at)
)
end
end
# Now order.update!(status: "shipped") generates zero
# recalculation queries. The callback is still there —
# it simply knows when to stay quiet. That single change — adding an if condition — eliminates three queries (two SELECTs and one UPDATE) from every status change. The callback still exists. It still fires when it matters. It simply knows when to stay quiet.
This principle applies broadly:
# Callbacks that fire unconditionally:
after_save :update_search_index # Fires on EVERY save
after_save :recalculate_stats # Fires on EVERY save
after_save :sync_to_external_service # Fires on EVERY save
# Callbacks that fire only when relevant:
after_save :update_search_index,
if: -> { saved_change_to_name? || saved_change_to_bio? }
after_save :recalculate_stats,
if: -> { saved_change_to_total? || previously_new_record? }
after_commit :sync_to_external_service, on: :create
# Only on creation — updates are synced via a background job
# The difference in a bulk import of 10,000 records:
# Unconditional: 30,000 extra callback queries
# Conditional: ~200 (only the ones that actually changed relevant fields)
The difference in a bulk operation is staggering. An import of 10,000 records with three unconditional after_save callbacks generates 30,000 extra queries. The same import with properly conditioned callbacks might generate a few hundred. Same Ruby. Same models. Dramatically different PostgreSQL workload.
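The bulk-import arithmetic is worth seeing at a glance. The 2% "relevant change" rate below is an illustrative assumption, not a measurement:

```ruby
# Extra callback queries in a 10,000-record import:
# unconditional vs conditional callbacks.
RECORDS = 10_000
CALLBACKS = 3

unconditional = RECORDS * CALLBACKS
# Conditional callbacks fire only when a relevant field changed;
# the 2% rate here is an assumption for illustration.
conditional = (RECORDS * 0.02).round * CALLBACKS

puts unconditional # => 30000
puts conditional   # => 600
```

Two orders of magnitude, from one if: clause per callback.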
The escape hatches: when to bypass callbacks entirely
ActiveRecord provides methods that skip the entire callback chain. They are sometimes regarded with suspicion, as though bypassing callbacks is inherently dangerous. It is not. It is a tool, and like all tools, its value depends on whether you are reaching for it deliberately.
# ActiveRecord callbacks fire on save, update, create, destroy.
# They do NOT fire on:
User.where(organization_id: 7).update_all(active: false)
# One query: UPDATE users SET active = false WHERE organization_id = 7
# Zero callbacks. Zero validations. Zero touch. Zero counter_cache.
User.where(last_login_at: ..1.year.ago).delete_all
# One query: DELETE FROM users WHERE last_login_at < '2024-03-05'
# Zero callbacks. Zero dependent: :destroy cascades.
# These are the escape hatches. They bypass the entire callback
# chain and send exactly one query to PostgreSQL.
#
# The trade-off is explicit: no callbacks means no audit logs,
# no cache invalidation, no counter updates, no search reindex.
# If your application depends on those side effects, you must
# handle them manually.
#
# But for administrative operations, data migrations, and bulk
# updates where you control the full context — these methods
# are not just faster. They are honest about what they do.
update_all and delete_all send a single SQL statement to PostgreSQL. No instantiation, no callbacks, no validations, no touch, no counter cache. One query. For administrative operations — deactivating users, purging old records, resetting flags — these methods are not just faster. They are clearer about what they do. The code says "update these rows" and that is exactly what happens. No hidden queries, no implicit side effects, no transaction surprises.
The danger is using them when your callbacks do important work that you have forgotten about. If after_save :sync_to_search_index exists and you use update_all, your search index falls out of sync. This is a real risk. The mitigation is simple: before reaching for update_all, read the model's callbacks. All of them. Decide explicitly which side effects you are willing to forgo.
An honest word about this entire argument
I should be forthcoming about the limits of this analysis, because a waiter who overstates his case is no waiter at all.
For many Rails applications, callback-generated queries are not the bottleneck. If your application serves moderate traffic — a few hundred requests per minute — and your tables are properly indexed, the extra queries from callbacks add microseconds to each request. The six queries from a User save might total 2ms on a well-indexed database. That is not nothing, but it is not the thing making your application slow.
The callbacks become expensive under specific conditions: high write throughput, missing indexes, deep touch chains, hot parent rows, and — most commonly — the compound effect where multiple callback patterns combine on a single model. If your model has one uniqueness validation and nothing else, this article is informational, not urgent. If your model has the characteristics of the Order model above — multiple validations, counter caches, touch, custom after_save hooks — the compound cost is worth measuring.
The other honest note: ActiveRecord callbacks are not unique in this behavior. Django signals, Eloquent model events, SQLAlchemy event listeners, and Hibernate entity listeners all have analogous patterns. The specifics differ — Django's uniqueness validation is handled differently, Eloquent's touch implementation has its own characteristics — but the fundamental dynamic is the same. ORM lifecycle hooks generate database queries that are invisible to the developer who triggered the save. This is an ORM pattern problem, not a Rails problem.
Rails simply has the most widely-used, most feature-rich, and most thoroughly-documented callback system, which makes it the clearest example. And the clearest target.
How to audit your own callback query cost
Three approaches, in order of increasing thoroughness.
1. Count queries in your test suite
Rails provides ActiveSupport::Notifications for instrumenting SQL queries. Add a counter to your most critical model specs:
# Count queries per operation in your test suite:
class QueryCounter
attr_reader :count, :queries
def initialize
@count = 0
@queries = []
end
def call(_name, _start, _finish, _id, payload)
return if %w[SCHEMA TRANSACTION].include?(payload[:name])
@count += 1
@queries << payload[:sql]
end
end
# Usage in RSpec:
it "generates a reasonable number of queries on save" do
counter = QueryCounter.new
ActiveSupport::Notifications.subscribed(
counter.method(:call), "sql.active_record"
) do
user.update!(name: "Stephen")
end
expect(counter.count).to be <= 5
# If this number creeps up, someone added a callback.
# The test will tell you before production does.
end
This is the most valuable approach because it catches regressions. When someone adds a new callback to the User model six months from now, the test will fail. It will tell you "this save used to generate 5 queries and now it generates 8." That is a conversation worth having before the code reaches production.
2. Enable PostgreSQL query logging in development
# In config/environments/development.rb:
ActiveRecord::Base.logger = Logger.new(STDOUT)
# Or more granular — log query time and source:
ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
event = ActiveSupport::Notifications::Event.new(*args)
if event.duration > 1 # only queries over 1ms
Rails.logger.debug(
"[SQL #{event.duration.round(1)}ms] #{event.payload[:sql]}"
)
end
end
# Then perform a single save in the console:
# user.update!(name: "Stephen")
# Count the lines. Each one is a query your save generated.
Set log_min_duration_statement = 0 in your development postgresql.conf to log every query, or use the ActiveSupport notification above for more granular control. Then perform a single save and count the lines. The results are frequently educational.
I recommend doing this exercise once per quarter on your three most frequently-saved models. Open a Rails console, perform a single save, and count the queries. If the number has grown since last quarter, investigate. Callbacks accumulate like household staff — each one hired for a good reason, and suddenly you have a payroll problem.
3. Check pg_stat_statements in production
-- Find the hidden callback queries in production:
SELECT query,
calls,
mean_exec_time,
total_exec_time
FROM pg_stat_statements
WHERE query LIKE 'SELECT 1 AS one FROM%'
OR query LIKE 'UPDATE%updated_at%'
ORDER BY total_exec_time DESC
LIMIT 20;
-- The "SELECT 1 AS one" pattern is the unmistakable
-- signature of validates_uniqueness_of.
-- High call counts on "UPDATE ... SET updated_at"
-- with no other columns changing? That is touch: true.
The SELECT 1 AS one FROM pattern is the calling card of validates_uniqueness_of. If you see it with a high call count and no corresponding index, you have found a sequential scan running on every save. The UPDATE ... SET updated_at pattern with no other columns changing is touch: true.
Sort by total_exec_time. The callbacks that cost you the most will surface immediately. I have never run this query on a production Rails application without finding at least one surprise — a uniqueness validation without an index, a touch chain deeper than anyone realized, or a custom after_save that fires far more often than its author intended.
What Gold Lapel does with callback-generated queries
Gold Lapel sits between your Rails application and PostgreSQL. It sees every query — the ones you wrote and the ones ActiveRecord wrote on your behalf. It does not distinguish between the two, because PostgreSQL does not distinguish between the two. A query is a query.
This is precisely where callback-generated queries become interesting. The SELECT 1 AS one FROM users WHERE email = $1 that validates_uniqueness_of generates? Gold Lapel sees it hit the same column thousands of times per hour. If there is no index, Gold Lapel creates one. The counter_cache UPDATE that locks the parent row? Gold Lapel tracks the lock wait times and surfaces contention patterns you would never notice in application-level monitoring.
Your callbacks are not going away — they exist for good reasons. But the queries they generate deserve the same indexing attention as the queries you write deliberately. Gold Lapel provides that attention automatically, for every query, whether it came from your controller or from line 47 of a callback buried three associations deep.
Add gem "goldlapel-rails" to your Gemfile and the hidden queries get the same optimization as the visible ones. No code changes — Gold Lapel auto-patches ActiveRecord at boot. Because, frankly, the hidden queries have always needed the attention more.
One further thought, if I may. The counter_cache contention discussed above has a companion piece that takes up where this leaves off — I have written about the counter cache showdown in PostgreSQL, including the materialized view alternative that eliminates row-level lock contention entirely.