Features · Illustrated
What dbcrate does, in detail
The full inventory of what runs, when, and where it ends up, with diagrams, screen-grabs, and a moving picture or two.
Plate I. A small static binary that lives on the database host. It listens on no ports. It speaks outbound mTLS to the control plane for heartbeats and configuration, connects to the database it tends, and writes to your storage destination. It does nothing else, and it requires no root and no system service to run.
               ┌─ database host ──────────────────────────────┐
               │                                              │
╔══════════╗   │   ┌──────────┐        ┌─────────────────┐    │
║ control  ║◀─mTLS─│  agent   │───────▶│  PostgreSQL     │    │
║ plane    ║   │   │          │        │  (read role)    │    │
╚══════════╝   │   └────┬─────┘        └─────────────────┘    │
               │        │                                     │
               │        │ stream: pg_dump → zstd → encrypt    │
               │        ▼                                     │
               │   ┌──────────┐                               │
               │   │  upload  │──────▶ your bucket / SFTP     │
               │   └──────────┘                               │
               │                                              │
               └──────────────────────────────────────────────┘
   no inbound ports      no plaintext on disk      no shared keys
The agent listens on no ports. The control plane never sees backup plaintext.
On a schedule, it streams pg_dump through zstd compression and end-to-end encryption to your organisation’s public key, then uploads the ciphertext directly to storage. Nothing of size touches local disk. The agent fetches a matching pg_dump binary for the server’s major version on demand, from a signed registry — no shared system Postgres needed.
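The streaming shape can be sketched in miniature. This is an illustration only, not dbcrate's implementation: zlib stands in for zstd, a repeating-key XOR stands in for real public-key encryption, and the sink stands in for the upload. The point it makes is that data moves one fixed-size chunk at a time, so the full dump never exists on the host.

```python
import zlib

CHUNK = 64 * 1024  # fixed-size chunks; the whole dump is never held at once

def compress_stream(chunks):
    # zlib stands in for zstd in this sketch
    c = zlib.compressobj()
    for chunk in chunks:
        out = c.compress(chunk)
        if out:
            yield out
    yield c.flush()

def encrypt_stream(chunks, key):
    # placeholder cipher: a repeating-key XOR. The real agent encrypts to the
    # organisation's *public* key, so the host can never decrypt its own output.
    off = 0
    for chunk in chunks:
        yield bytes(b ^ key[(off + j) % len(key)] for j, b in enumerate(chunk))
        off += len(chunk)

def backup(source, sink, key):
    # source: the pg_dump stdout pipe; sink: the storage upload. Both are
    # file-like here so the pipeline shape is visible end to end.
    chunks = iter(lambda: source.read(CHUNK), b"")
    for ciphertext in encrypt_stream(compress_stream(chunks), key):
        sink.write(ciphertext)
```

Each stage holds one chunk at a time; swap the stand-ins for zstd and an asymmetric cipher and the shape is the whole job.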
Backups go to a bucket you own and pay for, on any S3-compatible provider, or to your own SFTP destination. The control plane never holds the bytes.
your AWS account           dbcrate control plane
────────────────           ─────────────────────
┌─────────────┐             ┌──────────────────┐
│   bucket    │             │   metadata DB    │
│  ┌───────┐  │             │  ┌─────────────┐ │
│  │ enc'd │  │             │  │ schedules   │ │
│  │ 12 GB │  │             │  │ retention   │ │
│  └───────┘  │             │  │ checksums   │ │
│  ┌───────┐  │             │  │ audit log   │ │
│  │ enc'd │  │             │  └─────────────┘ │
│  │ 12 GB │  │             │                  │
│  └───────┘  │             │   no bytes here  │
└─────────────┘             └──────────────────┘
       ↑                             ↑
       └────── ciphertext only ──────┘
╔═══════════════════════════════╗
║  S3-compatible, all of them   ║
╚═══════════════════════════════╝
▣ AWS S3           ▣ Backblaze B2
▣ Cloudflare R2    ▣ Hetzner OS
▣ Wasabi           ▣ MinIO
▣ Scaleway         ▣ Tigris
▣ DigitalOcean     ▣ self-hosted
                     Ceph / RGW
─────────────────────────────────
one config block, one set of
credentials, one upload path.
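One destination, one block of configuration. The shape below is illustrative only; the field names are invented for this sketch and are not dbcrate's actual schema.

```toml
[storage]
kind       = "s3"            # any S3-compatible endpoint, or "sftp"
endpoint   = "https://s3.eu-central-1.example.com"
bucket     = "acme-db-backups"
region     = "eu-central-1"
access_key = "…"             # fetched per job, held in memory only
secret_key = "…"
```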
Each backup is encrypted on the host to your organisation’s public key before it leaves the machine. The control plane never sees plaintext during a backup. The agent never holds a private key for any backup it has produced; a stolen agent cannot decrypt the archives it once made. The agent’s own identity (its mTLS keypair) lives only on the host it was enrolled on — a compromised control plane cannot impersonate it.
Credentials — database passwords, storage keys — are fetched per job, held in memory, never written to disk, and never written to the agent’s logs. Connections to the database use a least-privilege role you create. Storage destinations are validated end-to-end before they are saved (list, write, read, delete probe).
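The destination probe is simple to picture. A sketch, with a hypothetical `storage` client object standing in for whatever S3 or SFTP client the agent actually drives:

```python
import uuid

def probe(storage) -> bool:
    """Validate a destination end-to-end: list, write, read, delete."""
    key = f"dbcrate-probe-{uuid.uuid4()}"   # probe object name invented for this sketch
    payload = uuid.uuid4().bytes
    try:
        storage.list()                      # credentials and bucket resolve
        storage.put(key, payload)           # we may write
        if storage.get(key) != payload:     # we read back exactly what we wrote
            return False
        storage.delete(key)                 # retention pruning will work later
        return key not in storage.list()    # and the delete actually happened
    except Exception:
        return False
```

Only a destination that passes all four steps is saved; a read-only bucket or a mistyped secret fails here, not during the first backup.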
On the roadmap, paid plans only. Scheduled restores into a clean Postgres of the matching major version, on disposable infrastructure. Integrity checks run, your validation queries run, and the result (pass, fail, or alarm) is written to the audit log and surfaced on the database’s detail page. This is the only case where the control plane decrypts a backup, and only because you asked it to.
Mon 02:00   ┌─backup────┐                              [ OK ]
            │ pg_dump   ├──▶ encrypt ──▶ storage        ░░░░░
            └───────────┘                               ░░░░░
                                                        ░░░░░
Mon 04:30   ┌─verify────┐  (roadmap, paid)             [ OK ]
            │ fetch     │                               ░░░░░
            │ decrypt   ├──▶ pg_restore ──▶ ✓           ░░░░░
            │ rowcount  │                fk ✓           ░░░░░
            │ your SQL  │                 q ✓           ░░░░░
            └───────────┘                               ░░░░░
                                                        ░░░░░
audit log   verify.ok db=orders hash=0192a3..           ░░░░░
                                                        ░ X ░ ◀── one bar of red
─── 14-day record ────  ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      per failure, only
Drawn here as a target. The plumbing exists in the agent; the schedulers and verifier infrastructure on the control plane are the work in front of us.
On the roadmap, after verified restores. Continuous WAL streaming on top of the scheduled base backup, so the unit of recovery is a second instead of the last snapshot.
The intended shape: the agent takes a periodic base backup of the cluster (the backup you already configure), and in parallel ships WAL segments to your storage as Postgres rolls them. A restore picks a base, replays WAL up to the timestamp or LSN you choose, and stops. Target RPO is measured in seconds, set by how aggressively WAL is shipped — not by your daily schedule.
T0           ┌─ base backup ─┐
             │ pg_basebackup ├──▶ encrypt ──▶ storage/base/...
             └───────────────┘

T0 → T_now   ┌─ WAL shipping ─┐   (continuous)
             │ archive_command│
             │  ─▶ encrypt    ├──▶ storage/wal/000000010000...
             │  ─▶ upload     │                000000010000...
             └────────────────┘                000000010000...
                                               ...

recovery     pick base @ T_b, replay WAL until T_target, stop.
target RPO   ≈ ship interval (seconds).
PITR is one continuous stream sat on top of the periodic backup. The agent already streams and encrypts; WAL shipping is the next channel through the same pipe.
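In Postgres terms the shipping side is ordinary WAL archiving. A sketch of the shape, where the `dbcrate-agent` subcommand names are invented for illustration and not a published interface:

```ini
# postgresql.conf — shipping side
archive_mode    = on
archive_command = 'dbcrate-agent ship-wal %p'            # hypothetical subcommand
archive_timeout = 30                                     # seconds; bounds RPO when the database is idle

# postgresql.conf on the restored cluster — replay side
restore_command      = 'dbcrate-agent fetch-wal %f %p'   # hypothetical subcommand
recovery_target_time = '2026-04-21 11:59:00+00'          # replay stops here
```

`archive_mode`, `archive_command`, `archive_timeout`, `restore_command`, and `recovery_target_time` are standard Postgres settings; the agent's job is to be the command on either side of them.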
Pick a backup in the dashboard, point it at a target connection, and the agent does the rest: fetch, decrypt, decompress, apply with pg_restore. The result is written to the audit log.
dispatched from dashboard
snapshot 0192a3f4 2026-04-21 12.4 GB
target orders_staging (Postgres 16.2, empty)
▸ fetching ............................. ok
▸ decrypting ........................... ok
▸ decompressing ........................ ok
▸ pg_restore --jobs=4 .................. ok
✓ restored to orders_staging in 4m 12s.
✓ audit: restore.ok actor=marcus@example.com
day   ▣▣▣▣▣▣▣ · · · · · · · · · · · · · · · · · ·
wk    ▣ · · · · · · ▣ · · · · · · ▣ · · · · · · ▣
mo    ▣ · · · · · · · · · · · · · · · · · · · · · ▣
      ── kept ──        ── pruned ──        ── kept ──
default: 7 daily · 4 weekly · 12 monthly
override per database, in the console.
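The keep-or-prune decision is a small pure function. A sketch of Grandfather-Father-Son selection over snapshot dates, walking newest first, where one snapshot may fill a daily, weekly, and monthly slot at once; the behaviour shown is an assumption for illustration, and the console's per-database override is authoritative.

```python
def gfs_keep(snapshots, daily=7, weekly=4, monthly=12):
    # snapshots: datetime.date objects. Keep the first (i.e. newest) snapshot
    # seen for each of the newest `daily` days, `weekly` ISO weeks, and
    # `monthly` calendar months; everything else is pruned.
    keep = set()
    seen_days, seen_weeks, seen_months = set(), set(), set()
    for d in sorted(snapshots, reverse=True):          # newest first
        day, week, month = d, d.isocalendar()[:2], (d.year, d.month)
        if len(seen_days) < daily and day not in seen_days:
            seen_days.add(day)
            keep.add(d)
        if len(seen_weeks) < weekly and week not in seen_weeks:
            seen_weeks.add(week)
            keep.add(d)
        if len(seen_months) < monthly and month not in seen_months:
            seen_months.add(month)
            keep.add(d)
    return keep
```

With daily snapshots and the default 7 · 4 · 12 policy, at most 23 snapshots survive; everything outside the selected slots is deleted from storage.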
14:02:11  backup.start   db=orders
14:04:38  backup.ok      db=orders size=12.4G
14:04:39  retain.delete  snap=0181…  reason=GFS
04:31:02  verify.start   db=orders ver=16
04:33:48  verify.ok      db=orders rows=ok fk=ok
09:11:50  restore.start  by=marcus to=staging
09:16:02  restore.ok     elapsed=4m12s

append-only · scoped per organisation
Every backup tool is a position on the same set of trade-offs. We take ours in writing.
| Decision | What we do | What we give up |
|---|---|---|
| Logical vs. physical | Logical (pg_dump) in v1. PITR (base backup + continuous WAL) is next, after verified restores. | Sub-second RPO until PITR ships. |
| Where data lives | Your bucket or SFTP, your region. | The convenience of a one-click managed bucket. |
| Where keys live | Your organisation. The agent encrypts to the org public key and never holds the org private key. | Recovering from losing both your private key and your stored backups. |
| Verification | A roadmap feature, paid plans only. | A simpler product if we never ship it. |
| Audit | Everything consequential, append-only. | The freedom not to write some of it down. |