
How to Set Up pgAudit for SOC 2 and HIPAA Compliance

A well-run household keeps proper records. If you'll permit me, I should like to ensure yours are in order before the auditor arrives.

The Butler of Gold Lapel · March 21, 2026 · 18 min read

Why compliance requires audit logging at the database level

Application-level logging records what your code intended to do. Database-level audit logging records what actually happened. Compliance frameworks — quite rightly — care about the latter, because the database is the system of record. It is where the data lives, and it is where unauthorized access has consequences.

HIPAA's Technical Safeguards (45 CFR §164.312) require organizations to "implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information." The regulation does not suggest logging. It requires it. And it requires that the logs be examined — logging without review is a common audit finding, and one I encounter with some regularity.

SOC 2's Trust Services Criteria are more framework-oriented but equally specific in their expectations. CC6.1 requires logical access controls that restrict the ability to access, modify, or delete data to authorized users. CC6.2 and CC6.3 require that credential issuance and access changes are authorized and documented. CC7.2 requires monitoring system components for anomalies indicating malicious acts, natural disasters, or errors. All of these controls assume the existence of audit logs that can answer specific questions: who accessed what, when, and whether they were authorized to do so.

The gap between "we have logging" and "we have compliance-grade audit logging" is significant. An auditor does not want to see that you logged something. They want to see that you logged the right things, in a structured format, with sufficient detail to reconstruct events, stored in a tamper-evident location, and retained for the required period. pgaudit exists to bridge that gap for PostgreSQL. Allow me to show you how.

Why log_statement is not sufficient for compliance

PostgreSQL ships with built-in statement logging via the log_statement parameter. Set it to all and every SQL statement appears in the server log. This is adequate for debugging. It is not adequate for compliance, and the distinction — if you'll forgive the directness — is the difference between a household that keeps a guest book and one that merely leaves the front door ajar.

log_statement = 'all'
-- PostgreSQL's built-in logging
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- What you get in the log:
-- 2026-03-21 14:07:32.441 UTC [1842] LOG:  statement: SELECT ssn, diagnosis
--   FROM patient_records WHERE patient_id = 7743;

-- That is the entire record. No structure. No classification.
-- No distinction between a routine SELECT on a config table
-- and a SELECT on patient health records.

log_statement produces unstructured text. Every statement looks the same — a SELECT on a configuration table is indistinguishable from a SELECT on patient health records unless you parse the SQL text yourself. I find this personally concerning. There is no classification by statement type, no identification of the objects accessed, no separation of routine operations from sensitive ones. The pgaudit README states the problem directly: "Basic statement logging can be provided by the standard logging facility with log_statement = all. This is acceptable for monitoring and other usages but does not provide the level of detail generally required for an audit."

pgaudit addresses each of these limitations:

pgaudit structured output
-- With pgaudit configured:
-- AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.patient_records,
--   "SELECT ssn, diagnosis FROM patient_records WHERE patient_id = 7743",<not logged>

-- Structured fields: audit type, statement ID, substatement ID,
-- class (READ), command (SELECT), object type (TABLE),
-- object name (public.patient_records), SQL text, parameters

The differences that matter for compliance:

  • Structured fields — each log entry contains labeled fields (audit type, class, command, object type, object name, statement, parameters) that can be parsed, filtered, and queried without custom regex.
  • Statement classification — every entry is tagged as READ, WRITE, DDL, ROLE, FUNCTION, or MISC. An auditor asking "show me all schema changes" can filter on class = DDL rather than pattern-matching against SQL text.
  • Object identification — object audit entries include the fully-qualified table name. "Which queries touched patient_records last Tuesday?" becomes a simple filter rather than a full-text search through megabytes of SQL.
  • Granular targeting — object audit logging records only statements affecting specific tables, while log_statement is all-or-nothing. One monitors the silver; the other points a camera at the entire estate and hopes for the best.
  • Parameter logging — with pgaudit.log_parameter = on, the actual values passed to queries are recorded. log_statement logs the SQL text but not the bind parameters used by prepared statements — which means parameterized queries (the kind every modern application uses) are logged without the values that matter.

Installation and configuration

pgaudit must be loaded as a shared library at PostgreSQL startup. This requires a one-time restart when first enabling it — plan accordingly for production deployments. One does not rearrange the household mid-dinner.

Installation steps
-- 1. Install the system package (distro-specific)
-- Debian/Ubuntu:  apt install postgresql-16-pgaudit
-- RHEL/CentOS:    dnf install pgaudit_16
-- Amazon Linux 2: yum install pgaudit_16

-- 2. Add to postgresql.conf (requires restart)
shared_preload_libraries = 'pgaudit'

-- 3. Restart PostgreSQL, then create the extension
CREATE EXTENSION pgaudit;

-- 4. Verify installation
SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';

After installation, pgaudit defaults to logging nothing (pgaudit.log = none). Every logging decision is explicit — a sensible default that I appreciate. The following sections cover the two logging modes and their configuration in detail.

Session audit logging: broad coverage by statement class

Session audit logging captures all statements matching the configured pgaudit.log classes, regardless of which tables they affect. It is the simpler of the two modes and provides the baseline coverage that most compliance configurations begin with. Consider it the standing order for the household.

Session audit configuration
-- Session audit: logs all statements matching configured classes
-- Set at the system level for baseline compliance coverage
ALTER SYSTEM SET pgaudit.log = 'write, ddl, role';
SELECT pg_reload_conf();

-- Override per database for stricter requirements
ALTER DATABASE healthcare SET pgaudit.log = 'read, write, ddl, role';

-- Override per role for high-privilege accounts
ALTER ROLE dba_admin SET pgaudit.log = 'all';

The pgaudit.log parameter accepts a comma-separated list of statement classes. Classes can be subtracted with a minus prefix for fine-grained control:

Log class reference
-- pgaudit.log class reference
-- READ       SELECT, and COPY when the source is a relation or query
-- WRITE      INSERT, UPDATE, DELETE, TRUNCATE, and COPY to a relation
-- FUNCTION   Function calls and DO blocks
-- ROLE       GRANT, REVOKE, CREATE/ALTER/DROP ROLE
-- DDL        All CREATE, ALTER, DROP not covered by ROLE
-- MISC       Miscellaneous commands: DISCARD, FETCH, CHECKPOINT, VACUUM, SET
-- MISC_SET   Miscellaneous SET commands, e.g. SET ROLE
-- ALL        All of the above
-- NONE       No logging (default)

-- Combine classes with commas
ALTER SYSTEM SET pgaudit.log = 'write, ddl, role';

-- Subtract classes with a minus prefix
ALTER SYSTEM SET pgaudit.log = 'all, -misc, -misc_set';

Recommended session settings for compliance

Beyond the log classes, several additional settings deserve your attention for compliance-grade audit logging:

Additional session audit settings
-- Additional session audit settings
-- Log the query parameters (critical for compliance)
ALTER SYSTEM SET pgaudit.log_parameter = on;

-- Log statements in a nested call (functions, triggers)
ALTER SYSTEM SET pgaudit.log_nested_statements = on;

-- Log a separate entry for each relation referenced
ALTER SYSTEM SET pgaudit.log_relation = on;

-- Log statement text and parameters only with the first entry for a
-- statement/substatement pair; default off, meaning the text is
-- repeated with every entry
ALTER SYSTEM SET pgaudit.log_statement_once = off;

SELECT pg_reload_conf();

pgaudit.log_parameter is the most important of these. Without it, parameterized queries are logged with $1, $2 placeholders but no actual values — which means your audit trail records that a query ran but not what data was accessed. I must be direct: an audit log that says "someone looked at a patient record" without recording which patient is not an audit log that will satisfy anyone whose job title contains the word "compliance." Enable it.

pgaudit.log_relation is useful when a single statement touches multiple tables (via JOINs, subqueries, or views). With this enabled, pgaudit emits a separate log entry for each relation referenced, making it straightforward to find every statement that touched a specific table without parsing SQL.
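As an illustration, here is the kind of output one might expect from a two-table JOIN with pgaudit.log_relation enabled. The table names are hypothetical and the entries are a sketch following the field layout shown later in this article, not captured output:

```sql
-- Hypothetical example: one statement, two relations, two audit entries
-- (assumes session READ logging with pgaudit.log_relation = on)
SELECT p.diagnosis, v.visit_date
FROM patient_records p
JOIN visits v ON v.patient_id = p.patient_id;

-- One entry per referenced relation, same statement/substatement IDs:
-- AUDIT: SESSION,5,1,READ,SELECT,TABLE,public.patient_records,"SELECT ...",<none>
-- AUDIT: SESSION,5,1,READ,SELECT,TABLE,public.visits,"SELECT ...",<none>
```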

Configuration hierarchy

pgaudit settings follow PostgreSQL's standard parameter hierarchy: system-wide (postgresql.conf or ALTER SYSTEM), per-database (ALTER DATABASE), per-role (ALTER ROLE), and per-session (SET). This allows a layered approach: set baseline coverage globally, increase coverage for databases containing sensitive data, and maximize coverage for privileged roles.
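The one level not shown earlier is the per-session override, which can be useful for a supervised maintenance window. A brief sketch, including how to confirm which level a setting came from:

```sql
-- Per-session override: the narrowest level of the hierarchy.
-- Resets when the session ends.
SET pgaudit.log = 'all';

-- Inspect the effective value and its source
-- (source will read 'session', 'database', 'user', or 'configuration file')
SELECT name, setting, source
FROM pg_settings
WHERE name = 'pgaudit.log';
```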

Object audit logging: targeted monitoring for sensitive tables

Object audit logging is where pgaudit earns its place in the household. Instead of logging every statement matching a class, it logs only statements that affect specific tables — the tables you have identified as containing protected data. This is the difference between monitoring every room and stationing someone at the door of the vault.

It works through PostgreSQL's GRANT mechanism — an elegant design, if I may say so. You create a dedicated audit role, grant it permissions on the tables you want to monitor, and pgaudit logs any statement that would require those permissions. The audit role never logs in and never executes anything — it exists solely to define the audit scope.

Object audit configuration
-- Object audit: targeted logging for sensitive tables via GRANT
-- 1. Create a dedicated audit role (no login, no privileges)
CREATE ROLE pgaudit_role NOLOGIN;

-- 2. Tell pgaudit to use this role for object auditing
ALTER SYSTEM SET pgaudit.role = 'pgaudit_role';
SELECT pg_reload_conf();

-- 3. Grant permissions on tables containing protected data
-- Every operation matching these grants will be logged
GRANT SELECT, INSERT, UPDATE, DELETE ON patient_records TO pgaudit_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON payment_methods TO pgaudit_role;
GRANT SELECT ON audit_log TO pgaudit_role;

-- 4. Audit specific columns (PostgreSQL column-level GRANT)
GRANT SELECT (ssn, date_of_birth) ON employees TO pgaudit_role;

This approach has several advantages for compliance, each worth noting:

  • Precise scope — you are auditing access to patient_records and payment_methods specifically, not every SELECT in the database. This demonstrates intentional monitoring of sensitive data, which auditors value more than broad, unfocused logging.
  • Lower log volume — on a typical application database, the vast majority of queries touch non-sensitive tables (sessions, cache, configuration). Object audit logging skips all of those, reducing log volume and storage costs dramatically.
  • Column-level auditing — PostgreSQL supports column-level GRANT, and pgaudit respects it. You can audit access to specific columns (SSN, date of birth) without logging every query that touches the broader table.
  • No application changes — the audit configuration is entirely within PostgreSQL. No ORM changes, no middleware, no application code modifications.

Auditing all tables in a schema

If your protected data lives in a dedicated schema — a pattern I would warmly recommend for HIPAA deployments — you can grant audit permissions across the entire schema:

Schema-wide object auditing
-- Audit all tables in a schema containing PHI
DO $$
DECLARE
  tbl text;
BEGIN
  FOR tbl IN
    SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'phi' AND table_type = 'BASE TABLE'
  LOOP
    EXECUTE format(
      'GRANT SELECT, INSERT, UPDATE, DELETE ON phi.%I TO pgaudit_role',
      tbl
    );
  END LOOP;
END $$;

I should note: this grant applies to existing tables only. New tables added to the schema will need their own GRANT. Automate this with an event trigger or include it in your migration process — an unmonitored table in a PHI schema is precisely the sort of oversight that auditors are trained to find.
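One possible automation is an event trigger that grants the audit role on any table created in the PHI schema. This is a sketch; the trigger and function names are illustrative, and you should test it against your own migration tooling before relying on it:

```sql
-- Sketch: auto-grant audit permissions on new tables in the phi schema
CREATE OR REPLACE FUNCTION grant_audit_on_new_table()
RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
  obj record;
BEGIN
  FOR obj IN
    SELECT * FROM pg_event_trigger_ddl_commands()
    WHERE command_tag = 'CREATE TABLE'
      AND schema_name = 'phi'
  LOOP
    -- object_identity is already schema-qualified and quoted
    EXECUTE format(
      'GRANT SELECT, INSERT, UPDATE, DELETE ON %s TO pgaudit_role',
      obj.object_identity
    );
  END LOOP;
END $$;

CREATE EVENT TRIGGER audit_new_phi_tables
  ON ddl_command_end
  WHEN TAG IN ('CREATE TABLE')
  EXECUTE FUNCTION grant_audit_on_new_table();
```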

Combining session and object logging

Session and object audit logging are not mutually exclusive. A typical compliance configuration uses both: session logging for WRITE, DDL, and ROLE classes (capturing all data modifications, schema changes, and permission changes globally), and object logging for READ operations on specific sensitive tables (capturing who queried protected data without logging every routine SELECT).
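Put together, that typical configuration might look like the following (table names are illustrative):

```sql
-- Session logging: all writes, schema changes, and permission changes
ALTER SYSTEM SET pgaudit.log = 'write, ddl, role';
ALTER SYSTEM SET pgaudit.log_parameter = on;

-- Object logging: reads on the sensitive tables only
ALTER SYSTEM SET pgaudit.role = 'pgaudit_role';
SELECT pg_reload_conf();

CREATE ROLE pgaudit_role NOLOGIN;
GRANT SELECT ON patient_records TO pgaudit_role;
GRANT SELECT ON payment_methods TO pgaudit_role;
```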

Log format and parsing

pgaudit entries appear in the standard PostgreSQL log, prefixed with AUDIT:. Each entry follows a consistent, comma-separated format — structured, predictable, and ready for inspection at a moment's notice:

Example audit log entries
-- Session audit log entry:
-- AUDIT: SESSION,1,1,WRITE,INSERT,,,
--   "INSERT INTO patient_records (patient_id, ssn, diagnosis)
--    VALUES ($1, $2, $3)",
--   7743,'123-45-6789','Type 2 Diabetes'

-- Object audit log entry:
-- AUDIT: OBJECT,2,1,READ,SELECT,TABLE,public.patient_records,
--   "SELECT ssn, diagnosis FROM patient_records WHERE patient_id = $1",
--   7743

-- DDL audit log entry:
-- AUDIT: SESSION,3,1,DDL,ALTER TABLE,TABLE,public.patient_records,
--   "ALTER TABLE patient_records ADD COLUMN insurance_id integer",
--   <none>

-- ROLE change audit log entry:
-- AUDIT: SESSION,4,1,ROLE,GRANT,,,
--   "GRANT SELECT ON patient_records TO analyst_role",
--   <none>

The field order is fixed and documented:

Log field reference
-- Log entry field order:
-- AUDIT: <type>,<statement_id>,<substatement_id>,<class>,<command>,
--        <object_type>,<object_name>,<statement>,<parameter>
--
-- Field breakdown:
-- type             SESSION or OBJECT
-- statement_id     Sequential ID within the session
-- substatement_id  Sequential ID for sub-statements (triggers, functions)
-- class            READ, WRITE, DDL, ROLE, FUNCTION, MISC, MISC_SET
-- command          SQL command: SELECT, INSERT, UPDATE, DELETE, CREATE TABLE...
-- object_type      TABLE, INDEX, VIEW, FUNCTION, SCHEMA, SEQUENCE
--                  (OBJECT entries, plus SESSION entries for DDL
--                   or when pgaudit.log_relation is on)
-- object_name      Fully-qualified name: schema.table (same availability)
-- statement        The SQL text, with $N placeholders if parameterized
-- parameter        Bound values as CSV, <none> if no params, <not logged> if disabled

Parsing audit logs with SQL

For organizations that need to query audit logs directly — during incident investigation, access reviews, or the rather more urgent occasion of an auditor arriving ahead of schedule — loading the CSV log into a PostgreSQL table provides full SQL query capability:

Loading and querying audit logs
-- Configure PostgreSQL to write CSV-formatted logs
-- (makes parsing pgaudit entries straightforward)
-- Note: logging_collector requires a restart, not just a reload
ALTER SYSTEM SET log_destination = 'csvlog';
ALTER SYSTEM SET logging_collector = on;
ALTER SYSTEM SET log_directory = 'pg_log';
ALTER SYSTEM SET log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log';
-- Restart PostgreSQL, then verify with: SHOW logging_collector;

-- Then filter audit entries from the CSV log. Note the AUDIT: prefix
-- sits in the message field, not at the start of the line:
-- grep 'AUDIT:' postgresql-2026-03-21.csv

-- Or load into a table for SQL-based analysis:
CREATE TABLE audit_log_import (
  log_time          timestamptz,
  user_name         text,
  database_name     text,
  process_id        integer,
  connection_from   text,
  session_id        text,
  session_line_num  bigint,
  command_tag       text,
  session_start     timestamptz,
  virtual_txid      text,
  txid              bigint,
  error_severity    text,
  sql_state         text,
  message           text,
  detail            text,
  hint              text,
  internal_query    text,
  internal_position integer,
  context           text,
  query             text,
  query_pos         integer,
  location          text,
  application_name  text,
  backend_type      text,    -- PostgreSQL 13+
  leader_pid        integer, -- PostgreSQL 14+
  query_id          bigint   -- PostgreSQL 14+
);

COPY audit_log_import FROM '/var/lib/postgresql/16/data/pg_log/postgresql-2026-03-21.csv'
  WITH (FORMAT csv);

-- Query audit entries
SELECT log_time, user_name, database_name, message
FROM audit_log_import
WHERE message LIKE 'AUDIT:%'
  AND message LIKE '%patient_records%'
ORDER BY log_time DESC;

This approach is useful for ad-hoc investigation but not a replacement for a proper SIEM or log management system. For ongoing compliance, ship logs to a centralized platform where they can be indexed, searched, alerted on, and retained according to policy. A proper household does not keep the ledgers in a drawer beside the back door.

Log management, rotation, and retention

Generating audit logs is half the requirement. The other half — and I find this is where many organizations falter — is managing them: rotation to prevent disk exhaustion, retention to satisfy compliance periods, and shipping to a tamper-evident store that an auditor will accept.

Log rotation and shipping
-- postgresql.conf: log rotation settings
-- Rotate daily. Note: PostgreSQL rotates log files but never deletes
-- old ones; retention (e.g. 90 days hot for SOC 2) must be enforced by
-- your shipping pipeline or an external cleanup job
log_rotation_age = '1d'
log_rotation_size = 0
log_truncate_on_rotation = on   -- only takes effect with cyclic names like %a
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

-- For HIPAA: retain logs for minimum 6 years
-- Use external log shipping — do not store 6 years on the database server

-- Ship logs to a centralized, append-only store
-- Common approaches:
--   rsyslog/syslog-ng  -> Splunk / ELK / Datadog
--   Fluentd/Fluent Bit -> S3 / CloudWatch / BigQuery
--   Vector             -> any SIEM or object store

Retention requirements by framework

Framework | Minimum retention   | Notes
SOC 2     | 12 months (typical) | Not formally specified, but auditors expect 12 months of accessible logs
HIPAA     | 6 years             | From date of creation or date policy was last in effect, whichever is later
PCI-DSS   | 12 months           | Most recent 3 months must be immediately available for analysis
GDPR      | Varies              | Retention must be proportionate to purpose; delete when no longer needed

Tamper-evidence requirements

Compliance frameworks require that audit logs be protected from modification. An audit trail that can be edited after the fact is not an audit trail — it is a polite fiction. Practical approaches to tamper-evidence:

  • Immutable storage — S3 Object Lock (WORM mode), Azure Immutable Blob Storage, or Google Cloud Storage retention policies prevent deletion or modification of stored logs.
  • Append-only log shipping — ship logs to a centralized system where the database team does not have delete permissions. Separation of duties matters: the people whose actions are being audited should not control the audit logs. This is not a matter of trust — it is a matter of structure.
  • Log integrity verification — some SIEM platforms (Splunk, Datadog) provide hash-chain verification of log integrity. This allows you to prove that logs have not been altered since ingestion.

Storing audit logs on the database server itself fails the tamper-evidence requirement. A DBA with superuser access can modify or delete log files. Ship logs off the server as soon as they are written. I cannot stress this sufficiently.

Performance impact and mitigation

I should be forthcoming about the cost: pgaudit adds overhead. Compliance should not come at the expense of performance, and pretending there is no trade-off would be a disservice to you. Every audited statement generates additional I/O to write the structured log entry. The magnitude depends on what you are logging and how your database is used.

Measuring audit logging overhead
-- Measure baseline: pgaudit.log = 'none'
-- Run pgbench with standard workload
pgbench -c 16 -j 4 -T 60 -P 10 mydb

-- Enable write + DDL logging (typical compliance config)
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
SELECT pg_reload_conf();

-- Re-run benchmark
pgbench -c 16 -j 4 -T 60 -P 10 mydb

-- Typical overhead: 2-5% TPS reduction for write+ddl
-- With pgaudit.log = 'all' on a read-heavy workload: 8-15%
-- The I/O to log files is the bottleneck, not CPU

Typical overhead ranges:

Configuration                         | Workload   | Typical overhead
pgaudit.log = 'write, ddl, role'      | Mixed OLTP | 2-5% TPS reduction
pgaudit.log = 'all, -misc, -misc_set' | Mixed OLTP | 5-10% TPS reduction
pgaudit.log = 'all'                   | Read-heavy | 8-15% TPS reduction
Object audit on 5 tables              | Mixed OLTP | 1-3% TPS reduction

The primary bottleneck is log I/O, not CPU. Each audit entry is a synchronous write to the PostgreSQL log file. On workloads that generate thousands of auditable statements per second, the disk bandwidth consumed by logging can become significant. The good news: this is a problem with well-understood mitigations.

Mitigation strategies

Reducing audit logging overhead
-- Strategy 1: Use object audit logging instead of session READ
-- Instead of logging every SELECT in the database:
ALTER SYSTEM SET pgaudit.log = 'write, ddl, role';
-- Audit reads only on sensitive tables via object logging:
GRANT SELECT ON patient_records TO pgaudit_role;
GRANT SELECT ON payment_methods TO pgaudit_role;

-- Strategy 2: Exclude noisy statement classes
ALTER SYSTEM SET pgaudit.log = 'all, -misc, -misc_set';

-- Strategy 3: Move log files to fast, separate storage
-- postgresql.conf:
-- log_directory = '/mnt/fast-ssd/pg_log'

-- Strategy 4: Use asynchronous log shipping
-- Write logs locally, ship to SIEM asynchronously
-- Never parse or analyze logs on the database server itself

The single most effective mitigation is using object audit logging for READ operations instead of session-level pgaudit.log = 'read'. On a typical application database, READ operations outnumber WRITE operations by 10-100x. Logging all reads generates enormous volume; logging reads on five specific sensitive tables generates a fraction of that while providing the same compliance coverage. Precision, not volume, is what the auditor requires of you.

Cloud provider setup

pgaudit is supported by all major managed PostgreSQL services, though each provider has its own way of managing the configuration. I shall attend to the particulars for each.

AWS RDS and Aurora PostgreSQL

AWS RDS / Aurora setup
-- AWS RDS / Aurora PostgreSQL
-- 1. Create a custom parameter group (cannot modify default)
-- 2. Set shared_preload_libraries = 'pgaudit' in the parameter group
-- 3. Reboot the instance to load the library
-- 4. Connect and create the extension:
CREATE EXTENSION pgaudit;

-- 5. Set audit parameters in the parameter group:
--    pgaudit.log = 'write, ddl, role'
--    pgaudit.log_parameter = 'on'
--    pgaudit.role = 'rds_pgaudit'

-- 6. Create the audit role (RDS uses 'rds_pgaudit' by convention)
CREATE ROLE rds_pgaudit;
GRANT SELECT, INSERT, UPDATE, DELETE ON patient_records TO rds_pgaudit;

-- 7. Audit logs appear in CloudWatch Logs (postgresql log group)
-- Set log retention in CloudWatch for compliance requirements

AWS documentation for pgaudit covers the full setup. Key differences from self-managed: you cannot edit postgresql.conf directly — all parameters are set via custom parameter groups, and the instance must be rebooted to load the shared library. Use rds_pgaudit as the audit role name by convention.

Google Cloud SQL for PostgreSQL

Cloud SQL setup
-- Google Cloud SQL for PostgreSQL
-- 1. Enable the pgaudit flag:
-- gcloud sql instances patch INSTANCE_NAME \
--   --database-flags cloudsql.enable_pgaudit=on
-- (instance restarts automatically)

-- 2. Connect and create the extension:
CREATE EXTENSION pgaudit;

-- 3. Set audit parameters via database flags:
-- gcloud sql instances patch INSTANCE_NAME \
--   --database-flags \
--     cloudsql.enable_pgaudit=on,\
--     pgaudit.log=write\,ddl\,role,\
--     pgaudit.log_parameter=on

-- 4. Configure object auditing (Cloud SQL does not permit ALTER SYSTEM;
--    set pgaudit.role as a database flag instead):
-- gcloud sql instances patch INSTANCE_NAME \
--   --database-flags cloudsql.enable_pgaudit=on,pgaudit.role=pgaudit_role
CREATE ROLE pgaudit_role NOLOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON phi_table TO pgaudit_role;

-- 5. Audit logs appear in Cloud Logging (operations suite)
-- Route to BigQuery or Cloud Storage for long-term retention

Cloud SQL pgaudit documentation covers the flag-based configuration. The cloudsql.enable_pgaudit flag must be enabled before any other pgaudit parameters take effect. The instance restarts automatically when this flag is changed.

Azure Database for PostgreSQL

Azure Flexible Server setup
-- Azure Database for PostgreSQL (Flexible Server)
-- 1. Set server parameters via Azure Portal or CLI:
-- az postgres flexible-server parameter set \
--   --resource-group mygroup --server-name myserver \
--   --name shared_preload_libraries --value pgaudit
-- (server restarts)

-- 2. Set pgaudit parameters:
-- az postgres flexible-server parameter set \
--   --resource-group mygroup --server-name myserver \
--   --name pgaudit.log --value 'write, ddl, role'

-- 3. Connect and create the extension:
CREATE EXTENSION pgaudit;

-- 4. Configure object auditing (ALTER SYSTEM is not available on
--    Flexible Server; set pgaudit.role as a server parameter):
-- az postgres flexible-server parameter set \
--   --resource-group mygroup --server-name myserver \
--   --name pgaudit.role --value pgaudit_role
CREATE ROLE pgaudit_role NOLOGIN;

-- 5. Enable Azure Diagnostic Settings to route logs to:
--    Log Analytics workspace, Storage account, or Event Hub
-- Select "PostgreSQLLogs" category for audit log capture

Azure pgaudit documentation covers the parameter configuration via the Azure Portal or CLI. Azure provides native integration with Log Analytics workspaces for long-term retention and querying of audit logs.

Mapping pgaudit to compliance frameworks

The following table maps specific SOC 2 and HIPAA controls to pgaudit configuration. A necessary caveat: this is not legal advice — engage your compliance team or auditor for your specific requirements. What it is, however, is a practical starting point for translating regulatory language into database configuration, and in my experience, having this translation prepared before the auditor asks for it is the difference between a smooth review and an uncomfortable one.

Framework | Control | Requirement | pgaudit configuration
SOC 2 | CC6.1 | Logical access controls that restrict ability to access, modify, or delete data to authorized users | Object audit logging on sensitive tables. Log WRITE and DDL classes. Enable pgaudit.log_parameter for full query capture.
SOC 2 | CC6.2 | Prior to issuing system credentials, register and authorize new users | Log ROLE class to capture all GRANT, REVOKE, CREATE/ALTER/DROP ROLE statements.
SOC 2 | CC6.3 | Authorize, modify, or remove access based on job function | Log ROLE class. Correlate GRANT/REVOKE logs with access review tickets in your SIEM.
SOC 2 | CC7.2 | Monitor system components for anomalies indicating malicious acts or errors | Ship structured audit logs to SIEM. Alert on unusual READ patterns, DDL changes outside maintenance windows, and ROLE changes.
HIPAA | §164.312(b) | Implement mechanisms to record and examine activity in systems containing ePHI | Object audit logging on all tables containing PHI. Enable pgaudit.log_parameter to capture which records were accessed.
HIPAA | §164.312(a)(1) | Allow access only to authorized persons or programs (unique user identification) | Ensure each user has a unique database role. pgaudit logs include the session user. Do not use shared service accounts for PHI access.
HIPAA | §164.312(c)(1) | Protect ePHI from improper alteration or destruction (integrity controls) | Log WRITE class on PHI tables. Ship logs to immutable storage (S3 Object Lock, WORM-compliant storage).
HIPAA | §164.312(d) | Verify identity of person seeking access to ePHI | Combine pgaudit session user with application-level user context. Use SET application_name or a custom GUC to pass end-user identity.
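For the last row, one way to pass end-user identity through to the audit trail is a custom GUC set by the application. A sketch; the key name app.current_user and the email are assumptions, not a pgaudit convention:

```sql
-- Application sets the end-user identity at the start of each transaction
BEGIN;
-- third argument 'true' scopes the setting to this transaction
SELECT set_config('app.current_user', 'dr.jones@example.com', true);

-- Triggers or audited functions can read it for correlation
SELECT current_setting('app.current_user', true);
COMMIT;

-- Alternatively, set application_name, which appears directly
-- in the CSV server log alongside each AUDIT entry:
SET application_name = 'portal:dr.jones@example.com';
```

Note that pgaudit itself does not log custom GUCs; the value must be surfaced by a trigger or correlated via the surrounding log context.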

What auditors actually ask for

Compliance frameworks describe requirements in abstract terms. Auditors translate those into specific evidence requests. When the auditor arrives — and they will arrive — here is what you should be prepared to produce:

  • "Show me all access to customer PII in the last 90 days." — Object audit logs for your PII tables, filtered by date range. This is why object audit logging on specific tables is essential.
  • "Show me all schema changes in the last 12 months." — Session audit logs filtered on class = DDL. Every CREATE, ALTER, and DROP is recorded.
  • "Show me who was granted access to the production database and when." — Session audit logs filtered on class = ROLE. Every GRANT and REVOKE is recorded.
  • "Demonstrate that audit logs cannot be modified by the database administrator." — Evidence of log shipping to an immutable, external store. This is the tamper-evidence requirement.
  • "Show me your log review process." — SIEM dashboards, alerting rules, and evidence of regular review. pgaudit generates the logs; your operational processes demonstrate that someone is reading them.
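Assuming audit entries have been loaded into the audit_log_import table from the parsing section, the first three requests reduce to simple filters. The table name customer_pii is hypothetical; the split_part offsets follow the field order documented earlier (field 4 of the message is the class):

```sql
-- "All access to customer PII in the last 90 days"
SELECT log_time, user_name, message
FROM audit_log_import
WHERE message LIKE 'AUDIT:%'
  AND message LIKE '%customer_pii%'
  AND log_time >= now() - interval '90 days'
ORDER BY log_time;

-- "All schema changes in the last 12 months"
SELECT log_time, user_name, message
FROM audit_log_import
WHERE message LIKE 'AUDIT:%'
  AND split_part(message, ',', 4) = 'DDL'
  AND log_time >= now() - interval '12 months';

-- "Who was granted access, and when"
SELECT log_time, user_name, message
FROM audit_log_import
WHERE message LIKE 'AUDIT:%'
  AND split_part(message, ',', 4) = 'ROLE'
ORDER BY log_time;
```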

Compliance readiness verification

Before concluding, I would recommend running these queries to verify your pgaudit configuration is audit-ready. Better to discover a gap now than when someone is taking notes:

Compliance readiness checklist
-- Compliance readiness checklist: run these queries to verify your setup

-- 1. Verify pgaudit is loaded
SELECT name, setting FROM pg_settings WHERE name = 'shared_preload_libraries';
-- Should include 'pgaudit'

-- 2. Check current audit logging level
SELECT name, setting, source FROM pg_settings WHERE name LIKE 'pgaudit%';

-- 3. Verify the audit role exists and has grants
SELECT grantee, table_schema, table_name, privilege_type
FROM information_schema.role_table_grants
WHERE grantee = 'pgaudit_role'
ORDER BY table_schema, table_name;

-- 4. Confirm log_parameter is enabled
SHOW pgaudit.log_parameter;
-- Should return 'on'

-- 5. Verify log destination supports compliance retention
SHOW log_destination;
SHOW log_directory;
SHOW log_rotation_age;

How Gold Lapel helps maintain performance with audit logging enabled

pgaudit adds overhead proportional to the number of auditable statements. Every logged statement means additional I/O — and on high-throughput databases, that overhead accumulates. This is the honest counterpoint to everything above: compliance has a cost, and you should not pretend otherwise.

Gold Lapel reduces the total number of statements that reach PostgreSQL. By optimizing query patterns at the proxy level — collapsing redundant queries, improving execution plans, eliminating unnecessary round trips — the same application workload produces fewer database operations. Fewer operations means fewer audit log entries for the same business logic, which reduces both the I/O overhead of pgaudit and the storage volume of the resulting logs.

I should be candid: this is not the primary reason to use Gold Lapel. But if you are running pgaudit on a high-throughput database and the logging overhead is a concern, the query reduction is a genuine side benefit. A database that processes 10,000 queries per second generates more audit log volume than one that achieves the same application outcomes with 3,000 queries per second. The arithmetic is straightforward.

Gold Lapel also provides visibility into the query patterns that matter for compliance planning. Understanding which tables are queried most frequently, by which application paths, helps you make informed decisions about which tables need object audit logging and which statement classes to include in session logging. Informed auditing is both more effective and more efficient than broad, unfocused logging. And a well-prepared household, in my experience, is never caught off guard by a visitor.

Frequently asked questions