# GL Mesh
A write on one instance. Every peer invalidated within milliseconds. No configuration required.
## Overview
When you run multiple Gold Lapel instances pointing at the same PostgreSQL database, each instance maintains its own L1 and L2 cache. A write that arrives through one instance needs to invalidate cached data on every other instance as well — otherwise the peers may serve stale reads until their next refresh cycle.
GL Mesh solves this. Enable it with a single flag, and all instances form a peer-to-peer network that broadcasts cache invalidation events in real time. A write on instance A invalidates the relevant cache entries on instances B, C, and D within milliseconds. No external message broker. No Redis pub/sub. No shared cache layer to manage. The instances coordinate directly.
If you are running a single Gold Lapel instance, mesh is unnecessary — Gold Lapel's built-in write detection handles invalidation locally. Mesh becomes valuable the moment you scale to two or more instances.
## When to use mesh
The answer is straightforward: if you have more than one Gold Lapel instance pointing at the same database, enable mesh.
Common scenarios where mesh applies:
- Multiple application servers — each running its own Gold Lapel sidecar, all connecting to the same PostgreSQL database
- Kubernetes deployments — GL running as a sidecar container in each pod, with pods scaling up and down
- Regional deployments — GL instances in different availability zones or regions, sharing a primary database
- Blue-green deployments — during a deployment, both the old and new set of instances need synchronized caches
Without mesh, each instance detects writes that pass through its own proxy and invalidates accordingly. But a write that arrives through instance A is invisible to instance B — instance B will continue serving its cached copy until the next matview refresh or until it independently detects the change via write detection. With mesh, that window shrinks from seconds to milliseconds.
## Setup
Add the `--mesh` flag. That is the entire setup.
```shell
goldlapel --upstream 'postgresql://user:pass@localhost:5432/mydb' --mesh
```

Or in your configuration file:
```toml
# goldlapel.toml
[mesh]
enabled = true
```

Or as an environment variable:
```shell
GOLDLAPEL_MESH=true
```

On startup, Gold Lapel registers itself in the shared database, discovers existing peers, and connects to them. You will see the connections in the startup log:
```text
mesh: enabled — peer id a3f1c2d4
mesh: discovered 2 peers via _goldlapel.mesh_peers
mesh: connected to b5e6f7a8 (10.0.1.11:7934)
mesh: connected to c9d0e1f2 (10.0.1.12:7934)
```

No IP addresses to configure. No peer lists to maintain. No external coordination service. Gold Lapel instances find each other through the database they already share.
## Auto-discovery
Mesh auto-discovery uses the database as its registry. Each instance writes its address and a heartbeat timestamp to the _goldlapel.mesh_peers table — the same schema Gold Lapel uses for all its internal objects.
```sql
SELECT * FROM _goldlapel.mesh_peers;
-- instance_id  | address         | last_seen
-- -------------+-----------------+---------------------
-- a3f1c2d4     | 10.0.1.10:7934  | 2026-03-29 14:22:01
-- b5e6f7a8     | 10.0.1.11:7934  | 2026-03-29 14:22:03
-- c9d0e1f2     | 10.0.1.12:7934  | 2026-03-29 14:21:58
```

The discovery lifecycle:
- Registration — on startup, each instance inserts or updates its row in `_goldlapel.mesh_peers` with its peer address and current timestamp
- Discovery loop — every 30 seconds, each instance queries the table for peers it has not yet connected to. New peers are connected automatically
- Heartbeat — each instance refreshes its `last_seen` timestamp on every discovery loop iteration
- Stale cleanup — peers whose `last_seen` timestamp is older than 5 minutes are removed from the table and disconnected. This handles instances that were terminated without a clean shutdown
This approach requires no manual configuration and scales naturally. Whether you have 2 instances or 20, the behavior is identical — each instance discovers and connects to all others through the shared table. Instances that scale in are discovered within 30 seconds. Instances that scale out are cleaned up within 5 minutes.
## Manual peer configuration
In environments where Gold Lapel instances do not share database access — or where you prefer explicit control over the peer list — you can specify peers manually with the `--mesh-peers` flag.
```shell
# Explicit peer addresses — for environments without shared database access
goldlapel --upstream 'postgresql://...' --mesh --mesh-peers '10.0.1.10:7934,10.0.1.11:7934,10.0.1.12:7934'
```

Or in TOML:
```toml
# goldlapel.toml
[mesh]
enabled = true
peers = ["10.0.1.10:7934", "10.0.1.11:7934", "10.0.1.12:7934"]
```

When `--mesh-peers` is provided, database auto-discovery is skipped entirely. Gold Lapel connects directly to the listed addresses. This is useful in several situations:
- Split database access — instances that connect through different credentials or connection poolers and cannot all write to `_goldlapel.mesh_peers`
- Cross-region mesh — instances in different regions that share a logical database but where you want explicit control over which peers connect
- Testing and development — when you want to test mesh behavior between specific instances without auto-discovery
Manual peer configuration is live-reloadable. Update the peer list in goldlapel.toml and the changes take effect without a restart — new peers are connected, removed peers are disconnected.
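Conceptually, a live reload of the peer list reduces to a set difference between the old and new configuration. A minimal sketch (the function name is invented for illustration, not Gold Lapel's API):

```python
def diff_peer_lists(current, updated):
    """Compute which peers to connect and disconnect on a config reload."""
    current, updated = set(current), set(updated)
    to_connect = updated - current      # addresses added to the config
    to_disconnect = current - updated   # addresses removed from the config
    return sorted(to_connect), sorted(to_disconnect)
```

Peers present in both lists are untouched, so a reload never disturbs healthy connections.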
## How it works
GL Mesh uses iroh for its peer-to-peer networking layer — the same infrastructure that powers Gold Lapel's remote dashboard connectivity. Iroh provides encrypted, authenticated connections between peers without requiring a central server or complex network configuration.
### Peer-to-peer architecture
Each Gold Lapel instance generates a unique peer identity on first startup. When mesh is enabled, instances establish direct connections to each other using iroh's networking stack. The connections are encrypted and authenticated — only Gold Lapel instances that share the same database can form a mesh.
The mesh forms a full-mesh topology: every instance maintains a direct connection to every other instance. For typical deployments of 2-20 instances, this is efficient and simple. The connection overhead is minimal — each peer connection consumes a single lightweight stream.
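For intuition on why a full mesh stays cheap at this scale: each instance holds n−1 connections, and the whole mesh has n(n−1)/2 distinct links. A quick sketch of the arithmetic:

```python
def full_mesh_links(n):
    """Distinct peer-to-peer connections in a full mesh of n instances."""
    return n * (n - 1) // 2

def connections_per_instance(n):
    """Connections each instance maintains: one to every other peer."""
    return n - 1
```

At 20 instances that is 19 connections per instance and 190 links in total — small enough that no routing or gossip layer is needed.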
### Invalidation flow
When Gold Lapel detects a write — whether through WAL logical decoding, NOTIFY triggers, or direct observation through the proxy — the invalidation flow proceeds in two steps:
- Local invalidation — the instance that detected the write immediately invalidates the affected entries in its own L1 and L2 cache
- Broadcast — a compact invalidation message (table name and affected key ranges) is sent to all connected peers. Each peer invalidates its own local cache entries for the affected data
```text
mesh: write detected on table "orders" — invalidating L1/L2 cache
mesh: broadcasting invalidation to 2 peers
mesh: peer b5e6f7a8 acknowledged invalidation (1.2ms)
mesh: peer c9d0e1f2 acknowledged invalidation (2.4ms)
```

Invalidation messages are small — typically under 100 bytes — and are delivered over the existing peer connections with no additional round-trips. The entire broadcast-to-acknowledgment cycle typically completes in single-digit milliseconds on a local network.
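The two-step flow can be modeled in a few lines of Python. Everything here is illustrative — the `Peer` stub, `handle_write`, and evicting a whole table's entries are simplifications of the real key-range invalidation, not Gold Lapel's internals:

```python
class Peer:
    """Minimal stand-in for a connected mesh peer."""
    def __init__(self, peer_id, reachable=True):
        self.peer_id = peer_id
        self.reachable = reachable
        self.inbox = []

    def send(self, message):
        # Best-effort delivery: an unreachable peer simply reports failure.
        if self.reachable:
            self.inbox.append(message)
        return self.reachable

def handle_write(table, key_ranges, local_cache, peers):
    """Step 1: invalidate locally. Step 2: broadcast to all connected peers."""
    # Local invalidation first — the writing instance never serves stale data.
    local_cache.pop(table, None)

    # Compact invalidation message: table name plus affected key ranges.
    message = {"table": table, "key_ranges": key_ranges}
    acked = [peer.peer_id for peer in peers if peer.send(message)]
    return message, acked
```

Note the ordering: local invalidation happens before the broadcast, so the detecting instance is coherent even if every peer is unreachable.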
### Consistency model
Mesh invalidation is best-effort, with the underlying write detection as a safety net. If a peer is temporarily unreachable, the invalidation message for that peer is dropped — but the peer's own write detection (WAL or NOTIFY) will catch the change independently, typically within the next polling interval. Mesh accelerates invalidation; it does not replace the underlying detection mechanisms.
This design is intentional. A cache invalidation system that blocks writes when a peer is unreachable would trade availability for consistency — the wrong trade-off for a caching layer. Gold Lapel's approach provides millisecond-level coherence when all peers are connected, and graceful degradation to second-level coherence when a peer is temporarily unreachable.
## Dashboard
The Gold Lapel dashboard displays mesh status when mesh is enabled. You will see:
- Peer count — how many peers are currently connected
- Per-peer details — each peer's address, connection status, and latency
- Invalidation stats — total invalidations sent and received since startup
- Mesh health — connection status indicators for each peer
The same information is available programmatically via the /api/stats endpoint:
```shell
# Check mesh status via the API
curl -s http://localhost:7933/api/stats | jq '.mesh'
# Example response:
# {
#   "enabled": true,
#   "peer_id": "a3f1c2d4",
#   "peers": [
#     { "id": "b5e6f7a8", "address": "10.0.1.11:7934", "connected": true, "latency_ms": 1.2 },
#     { "id": "c9d0e1f2", "address": "10.0.1.12:7934", "connected": true, "latency_ms": 2.4 }
#   ],
#   "invalidations_sent": 1847,
#   "invalidations_received": 3291
# }
```

If you are running the remote dashboard, mesh status from all connected instances is aggregated into a single view — giving you a complete picture of cache coherence across your entire deployment.
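The JSON shape of the `.mesh` payload lends itself to simple health scripting. A hedged Python sketch that flags problem peers from the parsed response (the 10 ms budget is an arbitrary example threshold, not a Gold Lapel default):

```python
def unhealthy_peers(mesh_stats, max_latency_ms=10.0):
    """Flag peers that are disconnected or slower than a latency budget.

    mesh_stats is the parsed `.mesh` object from /api/stats.
    """
    problems = []
    for peer in mesh_stats.get("peers", []):
        if not peer["connected"]:
            problems.append((peer["id"], "disconnected"))
        elif peer["latency_ms"] > max_latency_ms:
            problems.append((peer["id"], f"latency {peer['latency_ms']}ms"))
    return problems
```

Fetch the payload however you like (e.g. `curl` piped into the script, or an HTTP client) and alert on a non-empty result.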
## Troubleshooting
### Peers not discovering each other
If instances are not finding each other via auto-discovery, verify that they are connecting to the same PostgreSQL database and that the `_goldlapel` schema is accessible to all instances. Check the `_goldlapel.mesh_peers` table directly — each instance should have a row with a recent `last_seen` timestamp.
If instances use different database credentials, confirm that all credentials have read/write access to the `_goldlapel` schema. Gold Lapel creates this schema on first startup if it has sufficient permissions.
### Peers discovered but not connecting
If peers appear in the `_goldlapel.mesh_peers` table but the startup log shows connection failures, the most common cause is a firewall blocking the mesh port.
```shell
# Default mesh port: 7934
# Ensure this port is open between all GL instances
# Example: allow mesh traffic on a private subnet
sudo ufw allow from 10.0.1.0/24 to any port 7934
```

The mesh port defaults to 7934 (one port above the default dashboard port of 7933). Ensure this port is open for TCP traffic between all Gold Lapel instances. In containerized environments, this means the mesh port must be exposed in your container configuration and allowed by your network policy.
In cloud environments with security groups (AWS, GCP, Azure), add an inbound rule allowing TCP 7934 from the security group or subnet that your Gold Lapel instances occupy.
### High invalidation latency
Invalidation latency is dominated by network latency between instances. On a local network or within the same availability zone, expect sub-5ms round-trips. Cross-region mesh will naturally have higher latency — a message between us-east-1 and eu-west-1 will take 70-80ms regardless of the transport.
If latency is unexpectedly high within the same network, check for network congestion or misconfigured routing. The /api/stats endpoint reports per-peer latency for diagnosis.
### Instance terminated without clean shutdown
If an instance is killed (SIGKILL, node failure, container eviction), it cannot deregister itself from the mesh. The remaining instances will attempt to connect to the stale peer until the 5-minute cleanup removes it from the `_goldlapel.mesh_peers` table. During this window, invalidation messages to the dead peer will fail silently — the other peers continue operating normally.
This is expected behavior and requires no intervention. The stale entry is cleaned up automatically.
## Config reference
| Flag | TOML | Env var | Default | Description |
|---|---|---|---|---|
| `--mesh` | `mesh.enabled` | `GOLDLAPEL_MESH` | `false` | Enable mesh peer discovery and P2P cache invalidation |
| `--mesh-peers` | `mesh.peers` | `GOLDLAPEL_MESH_PEERS` | (auto-discover) | Comma-separated list of peer addresses (`host:port`). Overrides database auto-discovery |
Both settings are live-reloadable. You can enable or disable mesh and update the peer list in goldlapel.toml without restarting the proxy.
Mesh is one of those features where the setup effort is so small relative to the operational benefit that there is little reason not to enable it in any multi-instance deployment. A single flag. Millisecond cache coherence. No infrastructure to manage. Your instances will coordinate themselves — you have more important things to attend to.