
Bun ORM and PostgreSQL: The SQL-First ORM That Respects Your Database

Most Go ORMs generate SQL you would never write by hand. Bun generates SQL you might actually recognize.

The Waiter of Gold Lapel · Updated Mar 20, 2026 · Published Mar 5, 2026 · 38 min read
The illustrator was asked to draw an ORM with low overhead. They submitted a blank canvas. We are told this was intentional.

Good evening. I understand you are dissatisfied with your ORM's overhead.

The Go ecosystem has a complicated relationship with ORMs. A significant portion of the community considers them an affront to the language's philosophy of simplicity and directness. "Just write SQL," they say, and they are not wrong — raw pgx is fast, explicit, and gives you full control. It is also the position I would take, were I advising a team building a 10-table service with well-understood access patterns and engineers who genuinely enjoy writing rows.Scan(&a, &b, &c, &d, &e, &f) for every query variation.

But raw SQL at scale means managing dozens of scan functions, hand-building every INSERT with the right number of placeholders, and writing your own relation-loading logic. For a 10-table application, this is manageable. For a 60-table application with 200 endpoints, it becomes its own form of complexity — just wearing different clothes. I have encountered production codebases with 4,000 lines of hand-written scan functions. The engineers who built them were competent. The engineers who maintained them were exhausted.
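To make that concrete, here is what one of those hand-written scan functions looks like — a minimal sketch with an invented four-column Customer and a stub row so it runs anywhere. Real codebases repeat this pattern, with more columns, for every table and every projection.

```go
package main

import "fmt"

// One hand-written scan function per table, per projection — the
// pattern that accumulates into thousands of lines. The struct and
// columns here are invented for this sketch.
type Customer struct {
	ID    int64
	Name  string
	Email string
	Tier  string
}

// rowScanner is satisfied by *sql.Rows and *sql.Row from database/sql.
type rowScanner interface{ Scan(dest ...any) error }

// scanCustomer must list destinations in exactly the order of the
// SELECT column list. Reorder one, and every call site silently
// scans the wrong columns.
func scanCustomer(r rowScanner) (Customer, error) {
	var c Customer
	err := r.Scan(&c.ID, &c.Name, &c.Email, &c.Tier)
	return c, err
}

// stubRow stands in for a real database row so the sketch is runnable.
type stubRow struct{}

func (stubRow) Scan(dest ...any) error {
	*(dest[0].(*int64)) = 1
	*(dest[1].(*string)) = "Ada"
	*(dest[2].(*string)) = "ada@example.com"
	*(dest[3].(*string)) = "free"
	return nil
}

func main() {
	c, err := scanCustomer(stubRow{})
	fmt.Println(c.Name, err) // Ada <nil>
}
```

Multiply by sixty tables, then by every query that selects a different subset of columns, and the 4,000-line figure stops sounding hyperbolic.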

Bun takes a position I find refreshing: it is an ORM that would rather be a query builder. Its creators at Uptrace describe it as "SQL-first," which means the API follows SQL structure rather than hiding it behind method chains that obscure what actually executes. You write .Where("status = ?", "pending"), and the generated SQL contains WHERE status = 'pending'. The correspondence is not approximate. It is structural.

The overhead compared to raw pgx is less than 5% for most read operations and less than 15% for writes. That is not a marketing claim — it is a measurable property of how Bun works internally. Bun inspects your struct fields once, at model registration time, and caches the result as a flat lookup table. GORM inspects them at every query, through Go's reflect package, building intermediate representations that allocate memory and burn CPU cycles. The difference is architectural, not incidental, and it is the reason this article exists.

GORM dominates Go ORM adoption by a wide margin. It is the default choice, the one with the most Stack Overflow answers, the one that appears in every tutorial. It is also the one that reflects every struct field at runtime, builds SQL through string concatenation, and has a Preload implementation that loads entire relation sets into memory before filtering. These are design decisions, not bugs, and they carry performance consequences that become visible at scale.

If you are here, you have likely already noticed those consequences. Allow me to show you the alternative.

Model definitions: struct tags that map to real SQL

Bun uses struct tags to define the mapping between Go types and PostgreSQL tables. This is not unusual — GORM does the same. What differs is the fidelity. Bun's tags map directly to PostgreSQL column attributes: types, defaults, constraints. There is no intermediate schema representation. What you write is what PostgreSQL receives.

Model definitions
package models

import (
	"time"

	"github.com/uptrace/bun"
)

type Customer struct {
	bun.BaseModel `bun:"table:customers,alias:c"`

	ID    int64  `bun:"id,pk,autoincrement"`
	Name  string `bun:"name,notnull"`
	Email string `bun:"email,notnull,unique"`
	Tier  string `bun:"tier,notnull,default:'free'"`

	Orders []*Order `bun:"rel:has-many,join:id=customer_id"`
}

type Order struct {
	bun.BaseModel `bun:"table:orders,alias:o"`

	ID         int64     `bun:"id,pk,autoincrement"`
	CustomerID int64     `bun:"customer_id,notnull"`
	Total      float64   `bun:"total,notnull,type:numeric(10,2)"`
	Status     string    `bun:"status,notnull,default:'pending'"`
	CreatedAt  time.Time `bun:"created_at,nullzero,notnull,default:current_timestamp"`

	Customer *Customer   `bun:"rel:belongs-to,join:customer_id=id"`
	Items    []*OrderItem `bun:"rel:has-many,join:id=order_id"`
}

type OrderItem struct {
	bun.BaseModel `bun:"table:order_items,alias:oi"`

	ID          int64   `bun:"id,pk,autoincrement"`
	OrderID     int64   `bun:"order_id,notnull"`
	ProductName string  `bun:"product_name,notnull"`
	Quantity    int     `bun:"quantity,notnull"`
	UnitPrice   float64 `bun:"unit_price,notnull,type:numeric(10,2)"`

	Order *Order `bun:"rel:belongs-to,join:order_id=id"`
}

A few details worth noting. The alias tag on BaseModel sets the table alias used in generated SQL — this makes JOINs and subqueries readable when you encounter them in EXPLAIN ANALYZE output. Without a short alias, you get customers__customer_id in your query plans, which is the sort of thing that causes a DBA to squint and then close the terminal. The type:numeric(10,2) annotation maps directly to the PostgreSQL column type rather than letting Bun guess from float64 — which, left to its own devices, would become double precision, a floating-point type that will silently round your financial calculations. And the relation tags (rel:has-many, rel:belongs-to) declare the relationship direction explicitly rather than inferring it.
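The rounding hazard is easy to demonstrate with nothing but the standard library. A sketch of why double precision and money do not mix — and why the explicit type:numeric(10,2) tag (or integer cents) is worth the keystrokes:

```go
package main

import "fmt"

func main() {
	// 0.1 has no exact binary representation, so ten line items
	// of $0.10 do not sum to $1.00 in float64.
	var total float64
	for i := 0; i < 10; i++ {
		total += 0.1
	}
	fmt.Println(total == 1.0)    // false
	fmt.Printf("%.17f\n", total) // prints the accumulated drift

	// Integer cents (or PostgreSQL's numeric type) stay exact.
	var cents int64
	for i := 0; i < 10; i++ {
		cents += 10
	}
	fmt.Println(cents == 100) // true
}
```

The drift is a fraction of a cent here. Across a million invoice lines, it is a reconciliation meeting.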

GORM infers much of this from naming conventions. CustomerID becomes customer_id, and the presence of a Customer field implies a belongs-to relationship. This works well until it does not — until the convention does not match your schema, and you spend an afternoon debugging silent misassociations. I have witnessed a production incident in which a GORM model's User field was meant to reference one users table, but the naming convention matched a different table with a different schema. GORM inferred the relationship silently and incorrectly. The queries succeeded. The data was wrong.

Bun asks you to be explicit. It takes slightly more keystrokes. It produces significantly fewer surprises.

What happens at model registration: reflection once, not per query

Before discussing query patterns, I should explain a mechanical difference that accounts for most of Bun's performance advantage over GORM. It concerns when and how each ORM inspects your Go structs.

Model registration and internal caching
// Bun's internal model cache — what happens at init
func newDB() *bun.DB {
	db := bun.NewDB(sqldb, pgdialect.New())

	// Register models so Bun inspects struct tags once
	db.RegisterModel(
		(*Customer)(nil),
		(*Order)(nil),
		(*OrderItem)(nil),
	)

	// After this call, Bun has cached:
	//   - Table name + alias for each model
	//   - Column names, types, defaults
	//   - Relation graph (has-many, belongs-to, many-to-many)
	//   - Scan functions for each field (no per-query reflect)
	//
	// GORM does this lazily on first use, then caches.
	// The difference: Bun's cache is a flat lookup table.
	// GORM's cache carries the full reflect.Type metadata.

	return db
}

When you call db.RegisterModel(), Bun walks your struct fields using Go's reflect package, parses the bun tags, and builds a Table struct that contains everything it needs: column names, Go types, scan functions, relation definitions, and SQL fragments for INSERT/UPDATE/SELECT. This happens once. Every subsequent query on that model type looks up the cached Table and uses it directly.
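A toy version of that one-time inspection, using only the standard library. This mirrors the idea described above — parse struct tags once, cache a flat column list per type — and makes no claim to match Bun's actual internals:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
	"sync"
)

// tableMeta is the "flat lookup table": everything a query needs,
// computed once per model type.
type tableMeta struct {
	columns []string
}

var (
	mu    sync.Mutex
	cache = map[reflect.Type]*tableMeta{}
)

// metaFor reflects over the struct on the first call only; every
// later call for the same type is a map lookup.
func metaFor(model any) *tableMeta {
	t := reflect.TypeOf(model).Elem()
	mu.Lock()
	defer mu.Unlock()
	if m, ok := cache[t]; ok {
		return m // cache hit: no reflection on the hot path
	}
	m := &tableMeta{}
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("bun")
		if tag == "" {
			continue
		}
		// The column name is the first comma-separated tag element.
		m.columns = append(m.columns, strings.Split(tag, ",")[0])
	}
	cache[t] = m
	return m
}

type order struct {
	ID    int64   `bun:"id"`
	Total float64 `bun:"total"`
}

func main() {
	fmt.Println(metaFor((*order)(nil)).columns) // [id total]
}
```

The `(*order)(nil)` idiom — a typed nil pointer — is the same trick Bun's RegisterModel uses: it carries the type information without allocating an instance.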

GORM also caches schema information, but its cache retains the full reflect.Type metadata and rebuilds certain structures per query — notably the callback chain and the Statement object. Each query allocates a new Statement from a sync.Pool, populates it via reflection, and passes it through the callback pipeline. The overhead per query is modest — perhaps 20-40 microseconds — but at thousands of queries per second, those microseconds aggregate into milliseconds, and those milliseconds aggregate into latency your users can feel.

This is not a criticism of GORM's design. The callback system is what makes GORM extensible — hooks, scopes, plugins, all flow through it. Bun chose a different trade-off: less extensibility, less overhead. If you need before/after hooks on every create operation across your entire application, GORM's architecture serves you well. If you need minimal overhead on every query, Bun's architecture serves you better. The choice is not which is better in the abstract, but which trade-off matters for your workload.

The N+1 problem: Relation() vs GORM's Preload

Every ORM performance guide must begin with the N+1 problem, and I shall not break with tradition. The pattern is universal: fetch N rows, then issue a separate query for each row's related data. The result is N+1 queries where 2 would suffice. Go's type system, for all its virtues, offers no protection here. The code compiles. The linter passes. The database weeps.

The N+1 pattern in Go — no type safety can prevent this
// The classic N+1 — Go's static typing does not protect you here
func getPendingOrderSummaries(ctx context.Context, db *bun.DB) ([]OrderSummary, error) {
	var orders []Order
	err := db.NewSelect().
		Model(&orders).
		Where("status = ?", "pending").
		Scan(ctx)
	if err != nil {
		return nil, err
	}

	summaries := make([]OrderSummary, 0, len(orders))
	for _, o := range orders {
		// Each iteration fires a SELECT against customers
		var customer Customer
		err := db.NewSelect().
			Model(&customer).
			Where("id = ?", o.CustomerID).
			Scan(ctx)
		if err != nil {
			return nil, err
		}
		summaries = append(summaries, OrderSummary{
			OrderID:      o.ID,
			Total:        o.Total,
			CustomerName: customer.Name,
		})
	}
	return summaries, nil
}
// 200 pending orders = 201 queries. 201 round trips.
// The goroutine scheduler does not make this faster.

Go's compile-time type checking catches many categories of error. This is not one of them. The code above compiles cleanly, passes all linters, and executes 201 sequential queries against PostgreSQL. Each query is individually fast — perhaps 0.06ms of server time for a primary key lookup. The aggregate server time is approximately 12ms. Manageable.

But the server time is not the problem. The network time is. Each query requires a round trip — send the query, wait for the response. On a local connection, that is approximately 0.3ms per round trip. Over a network (application in one availability zone, database in another), it is 1-3ms. Two hundred round trips at 1ms each is 200ms of pure network waiting. Your goroutines are parked on the scheduler, your connection pool slots are occupied, and your p99 latency has just tripled — all because of a loop that looked innocuous.

Fixed with Relation() — 2 queries for any row count
// Fixed with Relation() — 2 queries instead of 201
func getPendingOrderSummaries(ctx context.Context, db *bun.DB) ([]OrderSummary, error) {
	var orders []Order
	err := db.NewSelect().
		Model(&orders).
		Relation("Customer").  // Bun issues a second SELECT with IN(...)
		Where("status = ?", "pending").
		Scan(ctx)
	if err != nil {
		return nil, err
	}

	summaries := make([]OrderSummary, 0, len(orders))
	for _, o := range orders {
		// Customer is already loaded — no additional query
		summaries = append(summaries, OrderSummary{
			OrderID:      o.ID,
			Total:        o.Total,
			CustomerName: o.Customer.Name,
		})
	}
	return summaries, nil
}

// Deep relations work the same way:
err := db.NewSelect().
	Model(&orders).
	Relation("Customer").
	Relation("Items").
	Where("status = ?", "pending").
	Scan(ctx)
// 3 queries total: orders, customers (IN), items (IN).
// Row count is irrelevant. Query count is fixed.

Bun's Relation() issues a second query with an IN clause containing the parent IDs. Two queries. Two round trips. Whether you have 50 orders or 5,000, the query count stays at 2 (or 3, if you include Items). The cost scales with the rows returned, not with the number of parent rows.

Let me show you what PostgreSQL actually does with these two approaches, because the EXPLAIN ANALYZE output makes the difference visceral.

EXPLAIN ANALYZE — 201 queries vs 2
-- What the N+1 pattern actually costs PostgreSQL
-- Query 1: fetch pending orders
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Index Scan using idx_orders_status on orders o
--   Index Cond: (status = 'pending')
--   Rows Removed by Filter: 0
--   Actual Rows: 200
--   Planning Time: 0.08 ms
--   Execution Time: 0.31 ms

-- Queries 2-201: one per order, 200 times
EXPLAIN ANALYZE SELECT * FROM customers WHERE id = 4217;
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Index Scan using customers_pkey on customers c
--   Index Cond: (id = 4217)
--   Actual Rows: 1
--   Planning Time: 0.04 ms
--   Execution Time: 0.02 ms

-- Each PK lookup: ~0.06 ms (plan + execute)
-- 200 PK lookups: ~12 ms server-side
-- But: 200 network round trips at ~0.3 ms each = ~60 ms
-- Total: ~72 ms. The database is fast. The network is slow.

-- Compare with Bun's Relation() approach:
EXPLAIN ANALYZE SELECT * FROM customers
  WHERE id IN (4217, 4218, 4219, /* ... 200 IDs ... */);
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Index Scan using customers_pkey on customers c
--   Index Cond: (id = ANY ('{4217,4218,...}'::bigint[]))
--   Actual Rows: 187
--   Planning Time: 0.12 ms
--   Execution Time: 0.28 ms

-- 2 queries, 2 round trips: ~1.2 ms total.
-- The same data. 60x faster.

The numbers do not require commentary. Two queries, two round trips, 1.2ms total. Versus 201 queries, 201 round trips, 72ms on localhost, 200ms+ over a network. The database performance is nearly identical in both cases — primary key lookups are fast regardless. It is the protocol overhead that kills you. Every query, no matter how small, requires a parse, a bind, an execute, and a result transfer. Bun's Relation() eliminates 199 of those cycles.

This is mechanically similar to GORM's Preload(). Both generate an IN-based second query. The difference emerges when you need to filter or limit the related rows — which, in production, you almost always do.

Filtered relations: where Bun pulls ahead of GORM

Here is the distinction that matters at production scale. GORM's Preload loads all related rows into memory, then filters them in Go. Bun's Relation accepts a callback that modifies the generated SQL — filters, sorts, and limits execute on PostgreSQL, not in your application process.

Filtered relations — SQL-side, not application-side
// Relation() with a filter — load only what you need
var customers []Customer
err := db.NewSelect().
	Model(&customers).
	Relation("Orders", func(sq *bun.SelectQuery) *bun.SelectQuery {
		return sq.
			Where("status = ?", "shipped").
			OrderExpr("created_at DESC").
			Limit(5)
	}).
	Where("tier = ?", "enterprise").
	Scan(ctx)

// Each customer gets at most 5 shipped orders, sorted by date.
// Bun generates:
//   SELECT ... FROM customers WHERE tier = 'enterprise'
//   SELECT ... FROM orders
//     WHERE customer_id IN (...)
//     AND status = 'shipped'
//     ORDER BY created_at DESC LIMIT 5
//
// Compare with GORM's Preload, which loads *all* related rows
// and filters in Go memory. That distinction matters at scale.

Consider the practical impact. An enterprise customer with 12,000 historical orders. GORM's Preload("Orders") fetches all 12,000 rows, allocates 12,000 structs (each with 5-10 fields, each field requiring a reflect.Value allocation during scanning), transfers approximately 1.2MB of data over the network, and then your code picks out the 5 you wanted. Bun's filtered Relation tells PostgreSQL to return 5 rows. The network transfer differs by three orders of magnitude. The memory allocation differs by three orders of magnitude. The only thing that does not differ is the result — 5 orders, sorted by date.

GORM Preload vs Bun Relation — side by side
// GORM: Preload loads everything, filter in Go
var customers []Customer
db.Preload("Orders").Where("tier = ?", "enterprise").Find(&customers)
// Generated SQL:
//   SELECT * FROM customers WHERE tier = 'enterprise'
//   SELECT * FROM orders WHERE customer_id IN (...)
// Returns ALL orders for each customer.
// 100 enterprise customers x 500 orders each = 50,000 rows loaded.

// GORM does support simple where in Preload:
db.Preload("Orders", "status = ?", "shipped").Find(&customers)
// This filters on the database side — but no ORDER BY, no LIMIT.
// For the 5 most recent shipped orders? You are on your own.

// Bun: filter, sort, and limit on the database
db.NewSelect().
	Model(&customers).
	Relation("Orders", func(sq *bun.SelectQuery) *bun.SelectQuery {
		return sq.
			Where("status = ?", "shipped").
			OrderExpr("created_at DESC").
			Limit(5)
	}).
	Where("tier = ?", "enterprise").
	Scan(ctx)
// Returns exactly 5 orders per customer. PostgreSQL does the work.

GORM does support Preload("Orders", "status = ?", "shipped") for simple where clauses, and I should be fair about this — for many applications, simple where filtering is sufficient. The distinction matters when you need ORDER BY, LIMIT, or complex conditions in the preload itself. GORM does not support these in the Preload callback. For those, you fall back to manual queries or subqueries — which means you have left the ORM's relation-loading system entirely and are now maintaining two different query patterns for related data.

Bun keeps you inside the query builder for all of this. One API, consistent behavior, SQL you can read. For a deeper treatment of the GORM relation-loading question specifically, I have written a detailed comparison of Preload vs Joins on PostgreSQL.

An honest counterpoint on Relation() limits

I should note that Bun's Relation() with Limit(5) applies the limit to the entire related set, not per parent. If you load 100 customers with Relation("Orders", ... Limit(5)), you get 5 orders total across all customers, not 5 per customer. This is a consequence of how the IN-based second query works — SELECT * FROM orders WHERE customer_id IN (...) ORDER BY created_at DESC LIMIT 5 returns the 5 most recent orders across all matching customers.

For per-parent limits, you need a lateral join or a window function — which means raw SQL or a CTE with ROW_NUMBER(). Bun does not pretend to handle this automatically, and I respect the honesty. GORM does not handle it either, though it also does not surface this limitation clearly in its documentation. Neither ORM solves per-parent limiting out of the box. If this is your requirement, write the SQL. It is not difficult, and Bun's raw SQL scanning makes it painless.
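For the record, here is the shape of that SQL — a hedged sketch using ROW_NUMBER() over a partition. The helper name and column list are invented for this example; the statement itself is what you would hand to Bun's raw-query scanning (e.g. db.NewRaw(...).Scan(ctx, &rows)).

```go
package main

import "fmt"

// perParentLimitSQL returns a window-function query that keeps the
// n most recent shipped orders *per customer* — the per-parent limit
// that a single IN-based relation query cannot express. Helper and
// column list are illustrative, not part of Bun.
func perParentLimitSQL(n int) string {
	return fmt.Sprintf(`
SELECT id, customer_id, total, created_at
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY created_at DESC
           ) AS rn
    FROM orders AS o
    WHERE status = 'shipped'
) ranked
WHERE rn <= %d`, n)
}

func main() {
	fmt.Println(perParentLimitSQL(5))
}
```

Ten lines of SQL, one query, and PostgreSQL's window-function machinery does the per-parent bookkeeping that neither ORM will do for you.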

CTEs with .With(): the query builder feature GORM does not have

Common Table Expressions are PostgreSQL's Swiss Army knife for complex queries. They let you name subqueries, reference them multiple times, and build readable multi-step computations. They are also entirely absent from GORM's query builder. This is not a minor omission — CTEs are how you write reporting queries, materialization logic, and recursive traversals without resorting to multiple separate queries stitched together in application code.

CTEs built with the query builder — not raw SQL
// CTEs via .With() — Bun's quiet superpower
func getHighValueCustomerReport(ctx context.Context, db *bun.DB) ([]CustomerReport, error) {
	var reports []CustomerReport

	err := db.NewSelect().
		With("customer_totals", db.NewSelect().
			TableExpr("orders").
			ColumnExpr("customer_id").
			ColumnExpr("SUM(total) AS lifetime_value").
			ColumnExpr("COUNT(*) AS order_count").
			ColumnExpr("AVG(total) AS avg_order").
			Where("status != ?", "cancelled").
			GroupExpr("customer_id"),
		).
		TableExpr("customer_totals AS ct").
		Join("JOIN customers AS c ON c.id = ct.customer_id").
		ColumnExpr("c.name").
		ColumnExpr("c.email").
		ColumnExpr("c.tier").
		ColumnExpr("ct.lifetime_value").
		ColumnExpr("ct.order_count").
		ColumnExpr("ct.avg_order").
		Where("ct.lifetime_value > ?", 10000).
		OrderExpr("ct.lifetime_value DESC").
		Scan(ctx, &reports)

	return reports, err
}

// Generated SQL:
// WITH customer_totals AS (
//   SELECT customer_id, SUM(total) AS lifetime_value,
//          COUNT(*) AS order_count, AVG(total) AS avg_order
//   FROM orders WHERE status != 'cancelled'
//   GROUP BY customer_id
// )
// SELECT c.name, c.email, c.tier, ct.lifetime_value,
//        ct.order_count, ct.avg_order
// FROM customer_totals AS ct
// JOIN customers AS c ON c.id = ct.customer_id
// WHERE ct.lifetime_value > 10000
// ORDER BY ct.lifetime_value DESC

The generated SQL is clean. The Go code that produces it reads top-to-bottom and corresponds structurally to the SQL it generates. You could hand this query to a DBA, and they would nod. You could paste the generated SQL into pgAdmin and it would run unmodified. This is what "SQL-first" means in practice — the abstraction preserves the shape of the SQL rather than transforming it into something the database must reverse-engineer.

Let me show you what PostgreSQL does with this CTE, because understanding the execution plan is essential to knowing whether a CTE is helping or hurting.

EXPLAIN ANALYZE on the CTE query
-- EXPLAIN ANALYZE on the generated CTE
EXPLAIN ANALYZE
WITH customer_totals AS (
  SELECT customer_id, SUM(total) AS lifetime_value,
         COUNT(*) AS order_count, AVG(total) AS avg_order
  FROM orders WHERE status != 'cancelled'
  GROUP BY customer_id
)
SELECT c.name, c.email, c.tier, ct.lifetime_value,
       ct.order_count, ct.avg_order
FROM customer_totals AS ct
JOIN customers AS c ON c.id = ct.customer_id
WHERE ct.lifetime_value > 10000
ORDER BY ct.lifetime_value DESC;

--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Sort (actual time=48.21..48.24 rows=142 loops=1)
--   Sort Key: ct.lifetime_value DESC
--   Sort Method: quicksort  Memory: 38kB
--   -> Hash Join (actual time=47.80..48.12 rows=142 loops=1)
--         Hash Cond: (ct.customer_id = c.id)
--         -> CTE Scan on customer_totals ct (actual time=47.41..47.68 rows=142 loops=1)
--               Filter: (lifetime_value > 10000)
--               Rows Removed by Filter: 9858
--               -> HashAggregate (actual time=42.15..46.90 rows=10000 loops=1)
--                     Group Key: orders.customer_id
--                     Batches: 1  Memory Usage: 1553kB
--                     -> Seq Scan on orders (actual time=0.02..18.41 rows=487000 loops=1)
--                           Filter: (status <> 'cancelled')
--                           Rows Removed by Filter: 13000
--         -> Hash (actual time=0.31..0.31 rows=10000 loops=1)
--               -> Seq Scan on customers c (actual time=0.01..0.15 rows=10000 loops=1)
-- Planning Time: 0.28 ms
-- Execution Time: 48.39 ms

-- 48ms to aggregate 487K orders, filter to 142 high-value
-- customers, join their names, and sort. One query.
-- The alternative: fetch all orders into Go, group in a map,
-- sum in a loop, filter, sort, then query customers.
-- That approach takes ~800ms and allocates ~40MB.

48ms to aggregate 487,000 orders, group by customer, filter to high-value accounts, join names, and sort. One query. One round trip. PostgreSQL 12 and later will inline a non-recursive CTE into the main query when it is referenced only once (and not pinned with AS MATERIALIZED) — which means the CTE syntax is a readability convenience with zero performance cost. Earlier PostgreSQL versions materialized every CTE into a temporary result set, which could be slower for simple cases. If you are on PostgreSQL 11 or earlier, be aware of this distinction. If you are on PostgreSQL 12 or later, use CTEs freely.

With GORM, producing this query requires raw SQL via db.Raw(). That is a perfectly valid approach — raw SQL is not a failure — but it means you lose GORM's scanning, its hooks, its logging integration. You are writing two different kinds of database code in the same application: GORM for simple queries, raw SQL for complex ones. Bun's query builder covers both without switching modes.

Recursive CTEs: hierarchical data without Go-side loops

Recursive CTEs work the same way, and they deserve their own attention because hierarchical data (org charts, category trees, comment threads, bill-of-materials structures) appears in approximately one out of every five production Go applications I encounter.

Recursive CTE for a category tree
// Recursive CTE — category tree with depth tracking
func getCategoryTree(ctx context.Context, db *bun.DB, rootID int64) ([]CategoryNode, error) {
	var nodes []CategoryNode

	err := db.NewSelect().
		WithRecursive("category_tree", db.NewSelect().
			TableExpr("categories").
			ColumnExpr("id, name, parent_id, 0 AS depth").
			Where("id = ?", rootID).
			UnionAll(
				db.NewSelect().
					TableExpr("categories AS c").
					ColumnExpr("c.id, c.name, c.parent_id, ct.depth + 1").
					Join("JOIN category_tree ct ON ct.id = c.parent_id"),
			),
		).
		TableExpr("category_tree").
		OrderExpr("depth, name").
		Scan(ctx, &nodes)

	return nodes, err
}

// Generated SQL:
// WITH RECURSIVE category_tree AS (
//   SELECT id, name, parent_id, 0 AS depth
//   FROM categories WHERE id = 1
//   UNION ALL
//   SELECT c.id, c.name, c.parent_id, ct.depth + 1
//   FROM categories AS c
//   JOIN category_tree ct ON ct.id = c.parent_id
// )
// SELECT * FROM category_tree ORDER BY depth, name
//
// One query. Arbitrary depth. No Go-side loop.
// Try building this with GORM's query builder.

One query. Arbitrary depth. No Go-side loop that issues a query per level. No pre-fetching the entire table and building a tree in memory. PostgreSQL's recursive CTE engine handles the traversal internally, and Bun's query builder lets you express it without dropping to raw SQL.

GORM has no recursive CTE support in its query builder. The workaround is either db.Raw() (losing ORM features) or fetching all rows and building the tree in Go (losing database efficiency). Neither is satisfactory for a tree with 10,000 nodes and 15 levels of depth — the Go-side approach allocates a map of slices, walks the map, builds parent-child links, and handles orphan nodes. It is 40-60 lines of code that PostgreSQL can replace with a 10-line recursive CTE that executes in under 5ms.
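For contrast, a compressed sketch of that Go-side workaround — the approach the recursive CTE replaces. This is the pleasant core; production versions grow orphan handling, cycle guards, and stable ordering, which is where the 40-60 lines come from:

```go
package main

import "fmt"

// Category mirrors a flat categories row plus the assembled children.
type Category struct {
	ID       int64
	ParentID int64 // 0 = root
	Name     string
	Children []*Category
}

// buildTree assembles parent-child links in memory after fetching
// every row — O(n) time, but the whole table travels over the wire.
func buildTree(flat []*Category) []*Category {
	byID := make(map[int64]*Category, len(flat))
	for _, c := range flat {
		byID[c.ID] = c
	}
	var roots []*Category
	for _, c := range flat {
		if parent, ok := byID[c.ParentID]; ok {
			parent.Children = append(parent.Children, c)
		} else {
			roots = append(roots, c) // root, or an orphan in disguise
		}
	}
	return roots
}

func main() {
	flat := []*Category{
		{ID: 1, Name: "Electronics"},
		{ID: 2, ParentID: 1, Name: "Phones"},
		{ID: 3, ParentID: 1, Name: "Laptops"},
		{ID: 4, ParentID: 2, Name: "Cases"},
	}
	roots := buildTree(flat)
	fmt.Println(len(roots), len(roots[0].Children)) // 1 2
}
```

Note the quiet failure mode in the else branch: a genuinely orphaned row is indistinguishable from a root. The recursive CTE has no such ambiguity — it only returns rows reachable from the node you asked for.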

Subqueries: composable queries without string concatenation

Before moving to bulk operations, I should mention a feature that sits between simple queries and CTEs in complexity: subqueries. Bun treats a *bun.SelectQuery as a composable value that can be embedded in WHERE clauses, FROM clauses, and JOIN conditions.

Subquery composition
// Subqueries — composing queries without raw SQL
func getCustomersWithRecentLargeOrders(ctx context.Context, db *bun.DB) ([]Customer, error) {
	var customers []Customer

	subq := db.NewSelect().
		Model((*Order)(nil)).
		ColumnExpr("DISTINCT customer_id").
		Where("total > ?", 500).
		Where("created_at > NOW() - INTERVAL '30 days'")

	err := db.NewSelect().
		Model(&customers).
		Where("id IN (?)", subq).
		OrderExpr("name").
		Scan(ctx)

	return customers, err
}

// Generated SQL:
// SELECT c.id, c.name, c.email, c.tier
// FROM customers AS c
// WHERE id IN (
//   SELECT DISTINCT customer_id FROM orders AS o
//   WHERE total > 500 AND created_at > NOW() - INTERVAL '30 days'
// )
// ORDER BY name
//
// The subquery is a first-class *bun.SelectQuery.
// It composes into WHERE, FROM, JOIN — anywhere SQL allows.

The subquery is not a string. It is a Go value with type checking, parameter binding, and the same builder interface as any other query. You can test it independently, log it, and reuse it. This is the query builder equivalent of function composition — small queries that combine into larger queries without string manipulation or SQL injection risk.

GORM supports subqueries through db.Where("id IN (?)", db.Table("orders").Select("customer_id").Where(...)), which is functionally similar but syntactically more awkward. The difference is stylistic rather than functional — both produce valid, parameterized SQL. I mention it because subqueries are where Bun's SQL-first design shows its strength most clearly: the Go code reads like the SQL it produces.

Bulk operations: INSERT, upsert, and batch UPDATE

The difference between inserting 1,000 rows one at a time and inserting them in a single statement is the difference between a loading spinner and an imperceptible pause. This is not hyperbole — the numbers are severe, and I shall provide them.

Bulk operations — one query, not one thousand
// Individual inserts — one round trip per row
for _, item := range items {
	_, err := db.NewInsert().Model(&item).Exec(ctx)
	if err != nil {
		return err
	}
}
// 1,000 items = 1,000 INSERTs = 1,000 round trips = ~3-5 seconds

// Bulk insert — one multi-row INSERT
_, err := db.NewInsert().Model(&items).Exec(ctx)
// 1,000 items = 1 INSERT with 1,000 value tuples = ~35ms

// Bulk insert with ON CONFLICT (upsert)
_, err = db.NewInsert().
	Model(&items).
	On("CONFLICT (order_id, product_name) DO UPDATE").
	Set("quantity = EXCLUDED.quantity").
	Set("unit_price = EXCLUDED.unit_price").
	Exec(ctx)
// Atomic upsert — no separate existence check needed.

// Bulk update with a CTE and VALUES
_, err = db.NewUpdate().
	With("_data", db.NewValues(&updates)).
	Model((*OrderItem)(nil)).
	TableExpr("_data").
	Set("quantity = _data.quantity").
	Set("unit_price = _data.unit_price").
	Where("oi.id = _data.id"). // "oi" is the alias declared on OrderItem
	Exec(ctx)
// One query. No CASE/WHEN gymnastics.

Bun generates a single INSERT INTO ... VALUES (...), (...), (...) statement. One network round trip. PostgreSQL processes the batch in a single transaction, using a single WAL flush for the entire set. The ON CONFLICT clause handles upserts atomically — no need for a separate existence check, no race conditions between concurrent writers, no SELECT ... FOR UPDATE followed by INSERT or UPDATE.

EXPLAIN ANALYZE on a 1,000-row upsert
-- What Bun's bulk INSERT actually sends to PostgreSQL
INSERT INTO order_items (order_id, product_name, quantity, unit_price)
VALUES
  (101, 'Widget A', 3, 12.50),
  (101, 'Widget B', 1, 24.99),
  (102, 'Gadget C', 5, 8.75),
  -- ... 997 more rows ...
  (589, 'Sprocket Z', 2, 15.00)
ON CONFLICT (order_id, product_name) DO UPDATE
SET quantity = EXCLUDED.quantity,
    unit_price = EXCLUDED.unit_price;

-- EXPLAIN ANALYZE on a 1,000-row upsert:
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Insert on order_items (actual time=31.42..31.42 rows=0 loops=1)
--   Conflict Resolution: UPDATE
--   Conflict Arbiter Indexes: idx_order_items_order_product
--   Tuples Inserted: 847
--   Conflicting Tuples: 153
--   -> Values Scan on "*VALUES*" (actual time=0.01..2.18 rows=1000 loops=1)
-- Planning Time: 1.82 ms
-- Execution Time: 33.24 ms

-- 33ms for 1,000 rows: 847 inserts + 153 updates.
-- Compare with 1,000 individual round trips: ~3,200 ms.
-- That is a ~99% reduction. Not percent improvement — percent reduction.

33ms for 1,000 rows. That is 33 microseconds per row, including conflict detection, index maintenance, and WAL logging. Compare with 1,000 individual INSERT statements: each one requires a parse, a plan, an execute, a WAL flush, and a round trip. At 3.2ms per individual insert (network included), the total is 3,200ms. A 99% reduction is not an optimization technique — it is the difference between a correct implementation and an incorrect one. Inserting rows in a loop is a bug that happens to produce correct data.

Bulk insert with RETURNING
// Bulk insert with RETURNING — get generated IDs back
var items []OrderItem
// ... populate items ...

_, err := db.NewInsert().
	Model(&items).
	Returning("id").
	Exec(ctx)
// After Exec, each item in the slice has its ID field populated.
// One round trip. No second query to fetch IDs.

// Combine with ON CONFLICT for upsert + return:
_, err = db.NewInsert().
	Model(&items).
	On("CONFLICT (order_id, product_name) DO UPDATE").
	Set("quantity = EXCLUDED.quantity").
	Returning("id, quantity").
	Exec(ctx)
// Updated rows return their new values. Inserted rows return
// their generated IDs. All in one statement.

The RETURNING clause is worth special attention. After a bulk insert, you often need the generated IDs — for foreign key associations, for response payloads, for logging. Without RETURNING, you would need a second query: SELECT id FROM order_items WHERE order_id IN (...) ORDER BY .... With RETURNING, PostgreSQL sends the generated values back in the same response. One round trip. Bun scans the returned values directly into the slice you passed, which means after Exec, each struct in your slice has its ID field populated.

GORM's Create(&items) does support batch inserts as of v2, though its default CreateBatchSize is 100 and it builds the SQL through reflection on each field. For 1,000-row inserts, the difference is measurable: approximately 35ms (Bun) versus 52ms (GORM) on the same hardware. For 10,000-row inserts, the gap widens to approximately 310ms (Bun) versus 520ms (GORM), because GORM's per-field reflection cost scales linearly with row count.
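For completeness, here is the GORM side of that comparison; both knobs shown are documented GORM v2 API, and dsn and items are placeholders:

```go
// GORM v2 batch inserts — a sketch, not a benchmark harness.
// Imports: gorm.io/gorm and gorm.io/driver/postgres.
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
	CreateBatchSize: 1000, // default is 100; raise it for large batches
})
if err != nil {
	return err
}

// Create splits items into CreateBatchSize chunks automatically:
db.Create(&items)

// Or override the batch size for a single call:
db.CreateInBatches(&items, 1000)
```

Raising the batch size narrows the gap with Bun somewhat, but the per-field reflection cost inside each batch remains.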

The bulk update pattern deserves special attention. Bun's approach — using a CTE with VALUES and a FROM clause — produces a single UPDATE statement that modifies all rows in one pass. GORM's batch update generates individual UPDATE statements or uses CASE/WHEN syntax that becomes unwieldy past a few hundred rows. For a deeper look at Go bulk insert performance specifically, including pgx's CopyFrom protocol, I have benchmarked the options in detail.
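The CTE pattern is worth showing. The sketch below mirrors the bulk-update example in Bun's documentation, adapted to the OrderItem model from earlier; the _data alias and the order_item default table alias are assumptions to adjust for your schema:

```go
// One UPDATE for the whole slice: a CTE built from VALUES,
// joined back to the table. items carries the new quantities.
values := db.NewValues(&items)

_, err := db.NewUpdate().
	With("_data", values).
	Model((*OrderItem)(nil)).
	TableExpr("_data").
	Set("quantity = _data.quantity").
	Where("order_item.id = _data.id").
	Exec(ctx)

// Generated shape:
//   WITH _data (id, quantity, ...) AS (VALUES (...), (...))
//   UPDATE order_items AS order_item
//   SET quantity = _data.quantity
//   FROM _data
//   WHERE order_item.id = _data.id
```

One statement, one round trip, one pass over the table, regardless of how many rows the slice holds.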

Benchmarks: Bun vs GORM vs raw pgx

Numbers first. Arguments after. These benchmarks use PostgreSQL 16 on the same machine, same schema, same data. Go 1.22, Bun 1.2.x, GORM 2.x, pgx 5.x. Connection pooling set to 20 connections for all three. Results are median of 10,000 iterations after a 1,000-iteration warmup to ensure stable connection pools and warm PostgreSQL shared buffers.

Operation | Bun | GORM | Raw pgx | Note
Single row SELECT by PK | ~42 µs | ~89 µs | ~38 µs | Bun: ~10% over raw pgx
SELECT 100 rows, scan to structs | ~280 µs | ~620 µs | ~260 µs | GORM reflects per-field
SELECT 1,000 rows | ~2.1 ms | ~5.8 ms | ~1.9 ms | Gap widens with row count
SELECT 10,000 rows | ~18 ms | ~54 ms | ~16 ms | 3x gap at scale
INSERT 1 row | ~55 µs | ~110 µs | ~48 µs | GORM builds SQL via reflection
Bulk INSERT 1,000 rows | ~34 ms | ~52 ms | ~30 ms | GORM batches in chunks of 100 by default
Bulk INSERT 10,000 rows | ~310 ms | ~520 ms | ~280 ms | Bun batches automatically
UPDATE 1 row by PK | ~50 µs | ~98 µs | ~44 µs | GORM updates all columns by default
Relation load (100 parents + children) | ~0.6 ms | ~1.4 ms | N/A (manual) | Bun: IN query; GORM: Preload
CTE query | Native .With() | Raw SQL only | Raw SQL | Bun builds CTEs in the query builder

The pattern is consistent. Bun sits within roughly 10% of raw pgx for read operations, and within 15% for writes. GORM is typically 2-2.5x slower than pgx — not because of bad SQL, but because of runtime reflection, per-field callbacks, and intermediate allocation. At 10,000 rows, the gap is 3x. These are not pathological benchmarks designed to make GORM look bad. They are the workloads you encounter in production: read a list of orders, insert a batch of line items, load a customer with their recent activity.

Where does GORM's overhead come from?

Three places, each measurable independently:

  • Reflection per query. GORM uses reflect to inspect struct fields every time it builds SQL. Bun does this once during model registration and caches the result. The per-query reflection cost is approximately 15-25 microseconds for a 10-field struct.
  • Callback chain. GORM's hook system (BeforeCreate, AfterCreate, etc.) runs for every operation, even when no hooks are registered. Each callback checks whether a handler exists, allocates a scope, and proceeds through the chain. The overhead is approximately 5-10 microseconds per query.
  • Intermediate allocations. GORM builds SQL via strings.Builder and allocates interface slices for arguments. Bun uses a pre-allocated byte buffer and typed argument lists. The allocation difference is dramatic at scale:
Operation | Bun allocs | GORM allocs | Why
SELECT 1 row | 6 | 24 | GORM: reflect.Type lookups, callback slices, Statement pool
SELECT 100 rows | 108 | 412 | GORM: per-row reflect.Value creation for each field
SELECT 1,000 rows | 1,008 | 4,012 | Linear scaling: roughly 1 alloc per row (Bun) vs 4 per row (GORM)
INSERT 1 row | 8 | 31 | GORM: field iteration, callback chain, clause builders
Bulk INSERT 100 rows | 12 | 218 | Bun: one buffer. GORM: per-row clause assembly

At 1,000 rows, Bun makes 1,008 allocations versus GORM's 4,012. Every allocation is another heap object the garbage collector must track. Fewer allocations mean less GC pressure, which means more consistent latency at the tail — your p99 improves not because the average is faster, but because the garbage collector pauses less often.

None of these are bugs. They are trade-offs that GORM made in favor of developer experience and extensibility. But if your application processes thousands of queries per second, the aggregate overhead is real. For a broader treatment of the ORM vs raw SQL performance question, I have written about this elsewhere.

An honest note on benchmark methodology

I should be forthcoming about the limits of these numbers. Benchmarks on a single machine with a local PostgreSQL instance minimize network latency, which means they emphasize CPU-side overhead (where Bun wins) and underemphasize I/O-side costs (where all three are equivalent). In a production deployment with network-separated database and application, the absolute difference between Bun and GORM shrinks because network latency dominates. The relative difference remains — Bun is still faster — but the practical impact depends on whether your bottleneck is CPU (query building, struct scanning) or I/O (network round trips, disk access).

If your PostgreSQL queries consistently take 50ms because of missing indexes or sequential scans on large tables, the 40-microsecond difference between Bun and GORM is irrelevant. Fix the queries first. Then optimize the ORM layer. The order matters.

Column selection and aggregation: fetch less, compute there

The fastest bytes are the ones that never cross the network. Selecting only the columns you need — rather than SELECT * — reduces both PostgreSQL's work and the data transferred to your application. This is not a Bun-specific technique; it works with any ORM or raw SQL. But Bun makes it ergonomic, and I find that ergonomics determine what developers actually do in practice.

Column selection and aggregation
// Full model scan — every column, every row
var orders []Order
err := db.NewSelect().Model(&orders).Scan(ctx)
// SELECT id, customer_id, total, status, created_at FROM orders

// Column selection — only what you need
var results []struct {
	ID    int64   `bun:"id"`
	Total float64 `bun:"total"`
}
err = db.NewSelect().
	Model((*Order)(nil)).
	Column("id", "total").
	Where("status = ?", "pending").
	Scan(ctx, &results)
// SELECT id, total FROM orders WHERE status = 'pending'
// Less data over the wire. Less memory. Faster scans.

// ColumnExpr for computed values
var summaries []struct {
	Status     string  `bun:"status"`
	OrderCount int     `bun:"order_count"`
	Revenue    float64 `bun:"revenue"`
}
err = db.NewSelect().
	Model((*Order)(nil)).
	ColumnExpr("status").
	ColumnExpr("COUNT(*) AS order_count").
	ColumnExpr("SUM(total) AS revenue").
	GroupExpr("status").
	Scan(ctx, &summaries)
// Aggregation stays in PostgreSQL. Go gets the answer.

Bun scans results into anonymous structs, named structs, or raw maps — whatever matches the shape of your query. You are not forced to define a model for every query variation. A struct with two fields and two bun tags is sufficient. This flexibility is important because the alternative — always scanning into the full model struct — means transferring columns you never read, allocating memory for fields you never access, and preventing PostgreSQL from using index-only scans.

EXPLAIN ANALYZE — SELECT * vs column selection
-- The cost of SELECT * on a wide table
-- Table: orders (5 columns, 500K rows, ~45MB)
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Index Scan using idx_orders_status on orders
--   Index Cond: (status = 'pending')
--   Actual Rows: 12,400
--   Buffers: shared hit=11842
--   Planning Time: 0.06 ms
--   Execution Time: 14.21 ms

-- Now with column selection:
EXPLAIN ANALYZE SELECT id, total FROM orders WHERE status = 'pending';
--                              QUERY PLAN
-- ---------------------------------------------------------------
-- Index Scan using idx_orders_status on orders
--   Index Cond: (status = 'pending')
--   Actual Rows: 12,400
--   Buffers: shared hit=11842
--   Planning Time: 0.05 ms
--   Execution Time: 8.72 ms

-- Same index scan. Same rows. But 39% less execution time.
-- The difference: PostgreSQL reads fewer heap pages when it
-- can satisfy the query from the index alone (index-only scan)
-- or transfers less data to the client.
-- On a table with TEXT columns or wide JSONB, the gap is larger.

The EXPLAIN ANALYZE output tells the story. Same index scan. Same row count. But 39% less execution time with column selection, because PostgreSQL transfers less data to the client buffer and potentially satisfies the query from the index alone without touching the heap. On a table with TEXT columns, JSONB payloads, or wide rows, the gap is larger — I have measured 60-70% reductions on tables with large JSONB columns that were included in SELECT * but never used by the application.

For aggregation, ColumnExpr lets you embed arbitrary SQL expressions that PostgreSQL evaluates on the server. The result arrives pre-computed. Your Go code receives the answer, not the raw data to compute the answer from. For a table with 500,000 orders, the difference between fetching 500,000 rows and computing SUM/COUNT in Go versus receiving 5 grouped rows from PostgreSQL is the difference between 800ms and 2ms. Between 40MB of network transfer and 200 bytes. Between a GC-pressured application and one that barely notices the query happened.

GORM's UPDATE behavior: a quiet surprise worth knowing

I hesitate to devote an entire section to a single GORM behavior, but this one causes enough production incidents that I feel obligated. It concerns what happens when you update a record.

GORM's Save() updates every column
// GORM's default UPDATE behavior — a quiet surprise
type Order struct {
	gorm.Model
	CustomerID uint
	Total      float64
	Status     string
	Note       string
}

// You want to update just the status:
db.Save(&order) // after setting order.Status = "shipped"

// What GORM sends:
// UPDATE orders SET customer_id=4217, total=129.99,
//   status='shipped', note='', updated_at='2026-03-05...'
//   WHERE id = 42
//
// Every column. Even ones you did not change.
// If another process updated "note" between your read and
// write, you just overwrote their change with an empty string.
//
// The fix: use db.Model(&order).Update("status", "shipped")
// or db.Model(&order).Select("Status").Updates(&order)
//
// Bun's default: update only columns that were set.
// db.NewUpdate().Model(&order).Column("status").Where(...).Exec(ctx)
// Explicit. No surprises. No silent data loss.

GORM's Save() updates every column in the struct, including zero-value fields. If you read an order, set Status to "shipped," and call Save(), GORM sends an UPDATE that sets every column — including Note, which is now an empty string because you never loaded it (or because Go zero-initialized it). If another process updated Note between your read and your save, you have just silently overwritten their change.

GORM provides Update() and Select().Updates() to avoid this. They work. But Save() is the method name that every tutorial teaches, and its behavior is not obvious from its name. It does not "save the changes you made." It "sets every column to the current struct values." The distinction is subtle. The consequences are not.

Bun's approach is explicit: .Column("status") specifies which columns to update. There is no Save() method that silently includes every field. The method chain says what it does. If you want to update three columns, you list three columns. Verbosity in the service of correctness is not a flaw — it is a feature.

"The database was not slow. It was being asked poorly."

— from You Don't Need Redis, Chapter 3: The ORM Tax

Transactions: the RunInTx convenience

Transactions in Bun use the standard database/sql Begin/Commit/Rollback pattern. There is nothing surprising here — and that is the point. If you have written transactions with database/sql before, Bun's transactions work identically, because they are database/sql transactions.

Transactions with explicit control and RunInTx
// Transactions — Bun wraps database/sql's Begin/Commit
func transferFunds(ctx context.Context, db *bun.DB, from, to int64, amount float64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelSerializable,
	})
	if err != nil {
		return err
	}
	defer tx.Rollback()

	// Debit
	res, err := tx.NewUpdate().
		Model((*Account)(nil)).
		Set("balance = balance - ?", amount).
		Where("id = ? AND balance >= ?", from, amount).
		Exec(ctx)
	if err != nil {
		return err
	}
	affected, _ := res.RowsAffected()
	if affected == 0 {
		return fmt.Errorf("insufficient balance or account not found")
	}

	// Credit
	_, err = tx.NewUpdate().
		Model((*Account)(nil)).
		Set("balance = balance + ?", amount).
		Where("id = ?", to).
		Exec(ctx)
	if err != nil {
		return err
	}

	return tx.Commit()
}

// RunInTx — the convenience wrapper
err := db.RunInTx(ctx, &sql.TxOptions{}, func(ctx context.Context, tx bun.Tx) error {
	// If the function returns nil, tx.Commit() is called.
	// If it returns an error or panics, tx.Rollback() is called.
	// No manual defer. No forgotten rollback.
	_, err := tx.NewUpdate().Model((*Account)(nil)).
		Set("balance = balance - ?", amount).
		Where("id = ?", from).
		Exec(ctx)
	return err
})

The RunInTx helper is worth noting because it eliminates the most common transaction bug in Go applications: the forgotten rollback. When you use BeginTx directly, you must remember defer tx.Rollback() — and if you forget, a panicking goroutine will leave an open transaction that holds locks until the connection is recycled or PostgreSQL's idle_in_transaction_session_timeout fires. RunInTx handles the commit-or-rollback decision for you: return nil for commit, return an error (or panic) for rollback.

GORM's db.Transaction(func(tx *gorm.DB) error { ... }) provides the same convenience. Both work well. The difference is that Bun's transaction object (bun.Tx) exposes the same query builder interface as the non-transactional bun.DB, so your query code does not change inside a transaction. With GORM, the transaction *gorm.DB also shares the same interface — so on this point, the two ORMs are equivalent. Credit where it is due.

Migrations: explicit up and down, not AutoMigrate

GORM's AutoMigrate is beloved by tutorial authors and feared by production engineers. It inspects your models, compares them to the database schema, and creates or alters tables to match. It will add columns. It will not drop columns. It will not remove indexes. It does not track migration history. It does not support rollbacks. It is the database equivalent of a housekeeper who tidies up but never throws anything away — and who keeps no record of what was rearranged.

Bun migrations — explicit, versioned, reversible
// Bun migrations — version-controlled, Go-native
import "github.com/uptrace/bun/migrate"

var Migrations = migrate.NewMigrations()

func init() {
	Migrations.MustRegister(func(ctx context.Context, db *bun.DB) error {
		// Up
		_, err := db.NewCreateTable().
			Model((*OrderItem)(nil)).
			IfNotExists().
			Exec(ctx)
		if err != nil {
			return err
		}

		// Create indexes explicitly — not hidden in struct tags
		_, err = db.NewCreateIndex().
			Model((*OrderItem)(nil)).
			Index("idx_order_items_order_id").
			Column("order_id").
			Exec(ctx)
		return err
	}, func(ctx context.Context, db *bun.DB) error {
		// Down
		_, err := db.NewDropTable().
			Model((*OrderItem)(nil)).
			IfExists().
			Exec(ctx)
		return err
	})
}

// Run from main:
//   migrator := migrate.NewMigrator(db, Migrations)
//   migrator.Init(ctx)    // creates bun_migrations table
//   migrator.Migrate(ctx) // applies pending migrations
//
// GORM's AutoMigrate creates columns but never drops them,
// never removes indexes, and does not track migration history.
// Bun's migrator is explicit: you write the up, you write the down.

Bun's migration system is explicit. You write the up function. You write the down function. The bun_migrations table tracks which migrations have run. You can roll back. You can see the history. Index creation is a separate, visible step — not hidden in struct tags that may or may not trigger an ALTER TABLE.

I should note that Bun's Go-based migrations are simpler than dedicated migration tools like golang-migrate or goose. Bun lacks features like migration locking for concurrent deployments and Atlas-style declarative diffing. For complex migration needs, you might prefer goose or golang-migrate alongside Bun. They compose cleanly — the migration tool manages schema, Bun manages queries.

Soft deletes: the hidden filter on every query

Both GORM and Bun support soft deletes — marking records as deleted rather than removing them — but the implementations differ in a way that has performance implications worth discussing.

Soft deletes — GORM's magic vs Bun's explicit tags
// Soft deletes — Bun vs GORM approaches
// GORM: add gorm.Model (includes DeletedAt) and GORM
// automatically adds WHERE deleted_at IS NULL to every query.
// Convenient. Also invisible. Also a full-table filter on
// every SELECT if you lack a partial index.

// Bun: explicit. You model the column, you write the filter.
type Customer struct {
	bun.BaseModel `bun:"table:customers,alias:c"`

	ID        int64      `bun:"id,pk,autoincrement"`
	Name      string     `bun:"name,notnull"`
	DeletedAt *time.Time `bun:"deleted_at,soft_delete,nullzero"`
}

// Bun's soft_delete tag adds WHERE deleted_at IS NULL automatically.
// But unlike GORM, you can see it in the struct definition.
// You know the filter is there. You can plan your indexes.

// The index that makes soft deletes fast:
// CREATE INDEX idx_customers_active ON customers (id)
//   WHERE deleted_at IS NULL;
// A partial index. Only indexes non-deleted rows.
// GORM's AutoMigrate will not create this for you.

GORM automatically adds WHERE deleted_at IS NULL to every query on a model with a DeletedAt field. This is convenient. It is also invisible. New team members do not know it is happening. The query in the code says db.Find(&customers) and the query that reaches PostgreSQL says SELECT * FROM customers WHERE deleted_at IS NULL. If you are debugging a missing-data issue and reading the Go code, you will not see the filter. You will only see it in the PostgreSQL query log or in EXPLAIN ANALYZE output.

More importantly, the WHERE deleted_at IS NULL filter requires an index to be efficient. Without one, every query on the table becomes a sequential scan filtered by deleted_at. GORM's AutoMigrate does not create this index. A partial index — CREATE INDEX ... WHERE deleted_at IS NULL — is the correct approach, and you must create it manually regardless of which ORM you use.

Bun's soft_delete tag adds the same filter, but the tag is visible in the struct definition. You see soft_delete on the field, and you know the filter is there. You can plan your indexes accordingly. Both ORMs ultimately generate the same SQL; the difference is discoverability.

OpenTelemetry hooks: observability without ceremony

You cannot optimize what you cannot observe. Bun integrates with OpenTelemetry through a query hook that takes one line to enable.

OpenTelemetry integration
import "github.com/uptrace/bun/extra/bunotel"

// One line. Every query gets a span.
db.AddQueryHook(bunotel.NewQueryHook(
	bunotel.WithDBName("myapp"),
	bunotel.WithFormattedQueries(true),
))

// Each query now emits an OpenTelemetry span with:
//   - db.statement (the SQL)
//   - db.operation (SELECT, INSERT, UPDATE, DELETE)
//   - db.sql.table (the target table)
//   - duration
//
// Pair with Jaeger, Datadog, or any OTel-compatible backend.
// Slow query detection is now a dashboard filter, not a log grep.

Every query becomes a span in your distributed trace. The span includes the SQL statement (with WithFormattedQueries, the parameters are inlined for readability; avoid this in production if queries carry sensitive data), the operation type, the target table, and the duration. Feed this to Jaeger, Datadog, Grafana Tempo, or any OTel-compatible backend, and slow query detection becomes a dashboard filter rather than a log-grepping expedition.

GORM has a similar hook system through its Logger interface, but it does not integrate with OpenTelemetry natively. Third-party plugins exist (gorm.io/plugin/opentelemetry), but they require additional setup and the span attributes are less detailed — notably lacking the db.sql.table attribute that makes filtering by table trivial.

Custom hooks: composable query middleware

Bun's hook system is also extensible beyond OTel. You can add custom hooks for query logging, metrics collection, slow query detection, or circuit-breaking — all without modifying your query code.

Custom slow-query hook alongside OTel
// Custom query hook — log slow queries with context.
// (log is a zerolog-style logger from github.com/rs/zerolog;
// swap in your own structured logger.)
type SlowQueryHook struct {
	threshold time.Duration
}

func (h *SlowQueryHook) BeforeQuery(ctx context.Context, event *bun.QueryEvent) context.Context {
	return ctx
}

func (h *SlowQueryHook) AfterQuery(ctx context.Context, event *bun.QueryEvent) {
	dur := time.Since(event.StartTime)
	if dur > h.threshold {
		log.Warn().
			Str("query", event.Query).
			Dur("duration", dur).
			Str("operation", event.Operation()).
			Msg("slow query detected")
	}
}

// Register alongside OTel — hooks compose cleanly
db.AddQueryHook(bunotel.NewQueryHook(bunotel.WithDBName("myapp")))
db.AddQueryHook(&SlowQueryHook{threshold: 100 * time.Millisecond})

// Both hooks fire for every query. The OTel hook emits spans.
// The slow query hook logs warnings. No conflict, no ordering issues.

Hooks compose. The OTel hook and the slow-query hook both fire for every query, in registration order. Neither interferes with the other. This is the sort of infrastructure plumbing that quietly prevents 3am pages — you discover the slow query from a dashboard alert, click through to the trace, and see exactly which endpoint, which handler, which line of code issued the problematic query. Compare with grepping production logs for "duration" and hoping the format has not changed since last month.

The raw SQL escape hatch: for when the builder is not enough

I have praised Bun's query builder extensively. Now I shall note its limits, because a waiter who overstates his case is no waiter at all. Window functions with custom frames, PERCENTILE_CONT, lateral joins, GENERATE_SERIES with complex logic, full-text search with ts_rank and custom dictionaries — these exist at the edges of what any query builder can express. When you reach those edges, use raw SQL. It is not a defeat. It is the correct tool for the problem.

Raw SQL with struct scanning
// When the query builder is not enough — raw SQL with scanning
type RevenueByTier struct {
	Tier          string  `bun:"tier"`
	CustomerCount int     `bun:"customer_count"`
	Revenue       float64 `bun:"revenue"`
	MedianOrder   float64 `bun:"median_order"`
}

var results []RevenueByTier
err := db.NewRaw(`
	SELECT c.tier,
	       COUNT(DISTINCT c.id) AS customer_count,
	       SUM(o.total) AS revenue,
	       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY o.total) AS median_order
	FROM customers c
	JOIN orders o ON o.customer_id = c.id
	WHERE o.created_at >= ?
	GROUP BY c.tier
	ORDER BY revenue DESC
`, cutoffDate).Scan(ctx, &results)

// Bun scans raw results into structs by matching bun tags.
// No model registration required. No interface{} maps.
// The raw SQL is the query. The struct is the shape.

Bun scans raw query results into structs by matching bun tags to column names. This means you get the full power of raw SQL with the convenience of typed results — no interface{} maps, no manual type assertions, no rows.Scan(&a, &b, &c) with positional arguments that break when you add a column.

Window functions — raw SQL is the right choice
// Window functions — raw SQL is the right choice
type OrderWithRank struct {
	ID            int64   `bun:"id"`
	CustomerID    int64   `bun:"customer_id"`
	Total         float64 `bun:"total"`
	CustomerRank  int     `bun:"customer_rank"`
	RunningTotal  float64 `bun:"running_total"`
	PctOfCustomer float64 `bun:"pct_of_customer"`
}

var results []OrderWithRank
err := db.NewRaw(`
	SELECT id, customer_id, total,
	       ROW_NUMBER() OVER w AS customer_rank,
	       SUM(total) OVER (PARTITION BY customer_id
	                        ORDER BY created_at
	                        ROWS BETWEEN UNBOUNDED PRECEDING
	                        AND CURRENT ROW) AS running_total,
	       total / SUM(total) OVER (PARTITION BY customer_id) * 100
	         AS pct_of_customer
	FROM orders
	WHERE status = 'completed'
	WINDOW w AS (PARTITION BY customer_id ORDER BY total DESC)
	ORDER BY customer_id, customer_rank
`).Scan(ctx, &results)

// Window functions with named windows, custom frames,
// and percentage calculations. No query builder should
// attempt this. Raw SQL is not a fallback — it is the
// correct tool for the job.

Window functions are the canonical example of where raw SQL should be preferred without guilt. Named windows, custom frames (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), percentage calculations across partitions — no query builder should attempt this, and the ones that do produce code that is harder to read than the SQL it generates. Bun wisely does not try.

The key insight: Bun's raw SQL path and its query builder path both end at the same scanning layer. You do not lose type safety, logging, hook execution, or OpenTelemetry spans when you drop to raw SQL. This is a conscious design decision, and it means the transition between builder and raw is seamless rather than jarring. Your slow-query hook still fires. Your OTel spans still appear. The raw SQL is not a second-class citizen in Bun's architecture — it is a parallel entry point to the same pipeline.

Connection management: database/sql under the hood

Bun sits on top of Go's standard database/sql package. This is a deliberate architectural choice that has performance implications worth understanding — and pool sizing implications that catch many teams off guard.

Connection setup and pool configuration
package main

import (
	"database/sql"
	"fmt"
	"time"

	"github.com/uptrace/bun"
	"github.com/uptrace/bun/dialect/pgdialect"
	"github.com/uptrace/bun/driver/pgdriver"
)

func newDB() *bun.DB {
	dsn := "postgres://user:pass@localhost:5432/mydb?sslmode=disable"

	sqldb := sql.OpenDB(pgdriver.NewConnector(
		pgdriver.WithDSN(dsn),
		pgdriver.WithTimeout(5 * time.Second),
		pgdriver.WithDialTimeout(5 * time.Second),
		pgdriver.WithReadTimeout(5 * time.Second),
		pgdriver.WithWriteTimeout(5 * time.Second),
	))

	// Connection pool — these are database/sql settings
	sqldb.SetMaxOpenConns(20)        // 2-3x your CPU cores
	sqldb.SetMaxIdleConns(10)        // half of max open
	sqldb.SetConnMaxLifetime(30 * time.Minute)
	sqldb.SetConnMaxIdleTime(5 * time.Minute)

	db := bun.NewDB(sqldb, pgdialect.New())
	return db
}

// Bun sits on top of database/sql.
// This means standard Go connection pool management —
// no custom pool, no surprising defaults, no vendor lock-in.

Because Bun uses database/sql, you get the standard Go connection pool with its well-tested, well-documented behavior. SetMaxOpenConns, SetMaxIdleConns, SetConnMaxLifetime — these are the same settings every Go database application uses. There is no Bun-specific pool configuration to learn, no custom pool implementation to debug, no vendor-specific knobs that interact in surprising ways.

Connection pool sizing guidelines
// Connection pool sizing — the PostgreSQL formula
//
// Max connections = (CPU cores * 2) + effective_spindle_count
//
// For SSDs (spindle_count ≈ 0):
//   8-core server → 16 connections
//
// For database/sql pools across N application instances:
//   MaxOpenConns per instance = PostgreSQL max_connections / N
//
// Example: PostgreSQL max_connections = 100, 5 app instances
//   MaxOpenConns = 100 / 5 = 20 per instance
//   MaxIdleConns = 10 (half of MaxOpen)
//
// Common mistake: setting MaxOpenConns too high
//   50 connections per instance x 5 instances = 250 connections
//   PostgreSQL default max_connections: 100
//   Result: "too many connections" errors under load
//
// The fix is fewer connections, not more.
// PostgreSQL performs better with 20 busy connections
// than with 200 mostly-idle connections competing for locks.

The defaults in database/sql are, it must be said, not ideal. MaxOpenConns defaults to unlimited (which can exhaust PostgreSQL's connection limit under load), and MaxIdleConns defaults to 2 (which means heavy traffic causes constant connection churn — establishing and tearing down TCP connections, negotiating TLS, authenticating). Set both explicitly. The formula is straightforward: MaxOpenConns at 2-3x your CPU cores, MaxIdleConns at half that, and ConnMaxLifetime at 30 minutes to prevent stale connections from accumulating.

A common mistake I encounter: teams running 5 application instances with MaxOpenConns=50 each, totaling 250 potential connections against a PostgreSQL instance with max_connections=100. Under sustained load, the 101st connection attempt fails with "too many connections for role." The fix is fewer connections, not more. PostgreSQL performs better with 20 busy connections sharing CPU time than with 200 mostly-idle connections competing for locks and consuming shared memory.
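The sizing arithmetic is simple enough to encode as a startup guard. A sketch; the reserved parameter (connections held back for superuser and migration sessions) is my addition, not part of the formula above:

```go
package main

import "fmt"

// poolSize splits PostgreSQL's max_connections across application
// instances, holding back a reserve for admin and migration sessions.
func poolSize(pgMaxConns, instances, reserved int) (maxOpen, maxIdle int) {
	maxOpen = (pgMaxConns - reserved) / instances
	if maxOpen < 1 {
		maxOpen = 1
	}
	maxIdle = maxOpen / 2 // half of max open, per the guideline above
	if maxIdle < 1 {
		maxIdle = 1
	}
	return maxOpen, maxIdle
}

func main() {
	// max_connections=100, 5 app instances, 5 connections reserved.
	open, idle := poolSize(100, 5, 5)
	fmt.Println(open, idle) // 19 9

	// Apply with database/sql:
	//   sqldb.SetMaxOpenConns(open)
	//   sqldb.SetMaxIdleConns(idle)
}
```

Computing the numbers at startup, from configuration, means adding a sixth instance cannot silently push the fleet past max_connections.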

GORM also uses database/sql, so the pool behavior is identical. pgx has its own pool implementation (pgxpool) that is slightly more efficient for high-concurrency workloads — it avoids the database/sql mutex on connection checkout — but requires separate configuration. I have benchmarked the difference elsewhere on this site. The practical delta is negligible for most applications and significant for applications processing 50,000+ queries per second.

When GORM is the better choice

I have spent considerable time explaining where Bun outperforms GORM. It would be dishonest — and unbecoming — to stop there. GORM is the better choice in several specific scenarios, and I shall enumerate them without hedging.

  • Team familiarity. If your team knows GORM and ships features productively with it, the cost of switching ORMs is real. Not in implementation effort (you could rewrite a GORM application to Bun in a week), but in lost muscle memory, unfamiliar error messages, and the period of reduced productivity during the transition. If your GORM queries are fast enough, keep them.
  • Ecosystem and community. GORM has more Stack Overflow answers, more tutorials, more third-party plugins, and a larger community. When you encounter an edge case at 2am, the odds of finding a GORM answer are higher than finding a Bun answer. This matters for teams without a dedicated database expert.
  • Hook system complexity. If your application relies heavily on BeforeCreate/AfterCreate hooks, scopes, plugins, or callbacks, GORM's architecture supports these patterns natively. Bun's hooks are simpler — query hooks that see the SQL after it is built, not model-level lifecycle hooks that can modify behavior. If you need a hook that automatically sets created_by to the current user on every insert across 40 models, GORM's callback system handles this cleanly. With Bun, you would write a query hook or handle it in your application layer.
  • Multi-database support. If you need to target MySQL, SQLite, and PostgreSQL with the same codebase, GORM's dialect system is more mature. Bun supports all three, but its PostgreSQL support is the deepest and its MySQL/SQLite support receives less attention. If PostgreSQL is your only target, this is irrelevant.
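For the created_by case specifically, the application-layer alternative can be as small as an interface that models opt into. A sketch under assumed names — `Auditable` and `stampAll` are illustrative helpers, not Bun API:

```go
package main

import "fmt"

// Auditable is implemented by any model that records its creator.
// The stamping happens in the application layer, before the insert,
// rather than in an ORM lifecycle hook.
type Auditable interface {
	SetCreatedBy(user string)
}

type Order struct {
	ID        int64
	CreatedBy string
}

func (o *Order) SetCreatedBy(user string) { o.CreatedBy = user }

// stampAll sets the creator on every model in the batch; the caller
// then hands the batch to db.NewInsert().Model(&orders) as usual.
func stampAll[T Auditable](user string, models []T) {
	for _, m := range models {
		m.SetCreatedBy(user)
	}
}

func main() {
	orders := []*Order{{ID: 1}, {ID: 2}}
	stampAll("alice", orders)
	fmt.Println(orders[0].CreatedBy, orders[1].CreatedBy) // alice alice
}
```

The cost is one extra call site per write path; the benefit is that the behavior is visible in the code that performs the insert, rather than hidden in a callback registry.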

The question is not "is Bun better than GORM" in the abstract. It is "does Bun's performance advantage and SQL fidelity matter for my specific application." If your queries are simple, your data volume is moderate, and your team is productive with GORM — keep GORM. The microseconds Bun saves are not worth the disruption.

If your queries are complex, your data volume is large, and you are tired of dropping to db.Raw() for every CTE — Bun is worth the switch.

A performance checklist for Bun and PostgreSQL

To summarize the patterns that matter most, in descending order of impact:

  1. Use Relation() instead of querying related rows inside a loop. The N+1 problem is the single largest source of ORM-related latency. Bun's Relation() with filtered callbacks gives you precise control over what loads and how. If you do nothing else from this article, do this.
  2. Select only the columns you need. .Column("id", "total") instead of SELECT *. Less data transferred, less memory allocated, faster scans. On wide tables with JSONB or TEXT columns, this alone can cut query time in half.
  3. Bulk your writes. db.NewInsert().Model(&items) for batch inserts. ON CONFLICT for upserts. CTE-based updates for batch modifications. RETURNING to get generated values back without a second query. Individual inserts in a loop are a bug.
  4. Push computation to PostgreSQL. Use ColumnExpr with aggregate functions. Use CTEs for multi-step computations. Fetch 5 grouped rows, not 500,000 raw rows. PostgreSQL's query optimizer is better at aggregation than your Go for loop.
  5. Use .With() for CTEs. Complex reporting queries belong in a CTE, not in multiple separate queries stitched together in Go. Recursive CTEs for hierarchical data. Multiple CTEs for multi-step transformations. PostgreSQL 12+ inlines simple CTEs automatically.
  6. Configure your connection pool. Set MaxOpenConns and MaxIdleConns explicitly. The defaults are not production-ready. Total connections across all instances must stay under PostgreSQL's max_connections.
  7. Enable OpenTelemetry hooks. You cannot optimize what you cannot observe. One line of code gives you full query tracing. Add a slow-query hook for alerting. Hooks compose cleanly.
  8. Use raw SQL without guilt. Bun's raw path and builder path share the same scanning and hook infrastructure. Window functions, lateral joins, PERCENTILE_CONT — write them in SQL. The struct scanner does not care how the SQL was produced.
  9. Profile with EXPLAIN ANALYZE. Not EXPLAIN — EXPLAIN ANALYZE, with actual execution times. The query plan will show you sequential scans, hash joins, sort spills to disk, and every other decision PostgreSQL made. Read the plan. It is telling you what to fix. I have written a complete guide to EXPLAIN ANALYZE if the output looks unfamiliar.
  10. Index your soft delete filter. If you use Bun's soft_delete tag, create a partial index on WHERE deleted_at IS NULL. Without it, every query on a soft-deleted model performs a full-table filter.
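To make item 3 concrete, this is the statement shape a bulk write produces: one multi-row INSERT instead of n round trips. The builder below is a hand-rolled sketch for illustration — Bun's db.NewInsert().Model(&items) generates the equivalent for you, which is precisely why individual inserts in a loop are a bug.

```go
package main

import (
	"fmt"
	"strings"
)

// bulkInsertSQL builds one multi-row INSERT with PostgreSQL-style
// numbered placeholders ($1, $2, ...) — the shape that Bun's
// db.NewInsert().Model(&items) emits for a slice of models.
func bulkInsertSQL(table string, cols []string, rows int) string {
	var b strings.Builder
	fmt.Fprintf(&b, "INSERT INTO %s (%s) VALUES ", table, strings.Join(cols, ", "))
	p := 1
	for r := 0; r < rows; r++ {
		if r > 0 {
			b.WriteString(", ")
		}
		ph := make([]string, len(cols))
		for c := range cols {
			ph[c] = fmt.Sprintf("$%d", p)
			p++
		}
		b.WriteString("(" + strings.Join(ph, ", ") + ")")
	}
	return b.String()
}

func main() {
	fmt.Println(bulkInsertSQL("orders", []string{"id", "total"}, 3))
	// INSERT INTO orders (id, total) VALUES ($1, $2), ($3, $4), ($5, $6)
}
```

Three rows, one statement, one round trip. The same shape extends to ON CONFLICT clauses for upserts and a RETURNING clause to fetch generated values without a second query.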

What happens below the ORM

Everything in this guide operates at the application layer — choosing the right Bun methods, structuring queries to minimize overhead, using bulk operations. These are necessary. They are also insufficient. Permit me to explain what I mean.

Bun generates clean, predictable SQL. This is not merely an aesthetic virtue — it means the SQL is legible to the tools that sit between your application and PostgreSQL. A SELECT with explicit columns, a properly structured CTE, a bulk INSERT with typed values — these are patterns that a query-aware proxy can analyze, categorize, and optimize without misinterpreting the intent.

Gold Lapel is that proxy. It observes the actual query traffic flowing from your Go application to PostgreSQL. It detects missing indexes from access patterns — the WHERE status = 'pending' that hits a sequential scan because no one created idx_orders_status — and creates them. It identifies repeated expensive queries — the reporting CTEs that run 200 times per hour with identical parameters — and materializes the results. It catches N+1 patterns that slipped past code review, that the team knows about but has not prioritized, and batches them before they reach the database.

Bun's SQL-first philosophy makes it an ideal companion for this kind of analysis. The generated SQL is close enough to hand-written that pattern detection works with full fidelity — no ORM-generated noise to parse through, no unnecessary wrapper queries, no BEGIN; SELECT 1; your_actual_query; COMMIT ceremony that some ORMs add for transaction detection. The query Bun sends is the query that gets analyzed. The query that gets analyzed is the query that gets optimized.

Write good Bun code. Use Relation(), use .With(), use bulk operations. Profile with EXPLAIN ANALYZE. Configure your connection pool. Then let the infrastructure handle the rest — the index suggestions you have not thought of yet, the query caching for patterns you have not noticed, the N+1 detection for code paths you have not reviewed. The ORM that respects your database pairs naturally with the proxy that makes your database faster.

The household runs well when every member of the staff attends to their duties with competence and discretion. Bun attends to your queries. Gold Lapel attends to your database. And I shall attend to whatever you need next.

One further thought, if I may. Bun's SQL-first approach raises a question that applies to every ORM: at what point does the abstraction cost more than it saves? I have addressed that question directly, with benchmarks, in ORM vs Raw SQL Performance.