# Backup and restore
z4j ships two CLI commands for database snapshots:

- `z4j backup --output PATH` - point-in-time snapshot to a single file
- `z4j restore PATH --force` - restore from a backup file
Both auto-detect the backend from `Z4J_DATABASE_URL`:
| Backend | Backup mechanism | Restore mechanism |
|---|---|---|
| SQLite | `VACUUM INTO` (online; brain keeps serving) | File copy (brain MUST be stopped) |
| PostgreSQL | `pg_dump -Fc -Z6` (online) | `pg_restore --clean --if-exists` |
These commands handle the database only. For a full disaster-recovery plan you also need `~/.z4j/secret.env` (auto-minted secrets - the only copy of `Z4J_SECRET` and `Z4J_SESSION_SECRET` for that install) and `~/.z4j/allowed-hosts` (operator-managed). See What else to back up.
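The DB snapshot plus those two files can be bundled into one off-host artifact. A minimal sketch, assuming default paths; `BACKUP_DIR`, the staging step, and the tarball name are conventions invented here, not part of z4j (the `z4j backup` call is skipped when the CLI is not installed):

```sh
set -eu
Z4J_HOME="${Z4J_HOME:-$HOME/.z4j}"            # where secret.env / allowed-hosts live
BACKUP_DIR="${BACKUP_DIR:-$HOME/z4j-backups}" # our own convention, not a z4j setting
STAMP="$(date +%F)"
STAGE="$(mktemp -d)"

# 1. Database snapshot (skipped here if z4j is not on PATH)
if command -v z4j >/dev/null 2>&1; then
  z4j backup --output "$STAGE/z4j-$STAMP.db"
fi

# 2. The two files `z4j backup` does not cover
for f in secret.env allowed-hosts; do
  if [ -f "$Z4J_HOME/$f" ]; then cp "$Z4J_HOME/$f" "$STAGE/"; fi
done

# 3. One dated bundle, ready to push off-host
mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/z4j-dr-$STAMP.tar.gz" -C "$STAGE" .
rm -rf "$STAGE"
echo "bundle: $BACKUP_DIR/z4j-dr-$STAMP.tar.gz"
```

Push the resulting tarball off-host exactly as described for plain DB backups below.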
## SQLite backup

```sh
z4j backup --output /var/backups/z4j-$(date +%Y-%m-%d).db
```

z4j keeps serving requests during the backup - `VACUUM INTO` produces a consistent snapshot via SQLite’s online backup API. The output is a fully self-contained SQLite file (no WAL, no journal needed alongside).
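For intuition about the mechanism, the same snapshot can be driven from the `sqlite3` shell directly against a throwaway database (requires SQLite 3.27+; z4j does the equivalent for you):

```sh
set -eu
DB="$(mktemp -d)/live.db"
# Build a tiny "live" database
sqlite3 "$DB" "CREATE TABLE t(x); INSERT INTO t VALUES (1), (2);"
# Online, consistent snapshot into a single self-contained file
sqlite3 "$DB" "VACUUM INTO '$DB.backup'"
sqlite3 "$DB.backup" "SELECT count(*) FROM t;"   # prints 2
```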
Output of `z4j backup`:

```
z4j: backup complete
backend: sqlite
output: /var/backups/z4j-2026-04-24.db
size: 12.34 MiB
z4j: move this file off-host (scp, rclone, S3, ...) for true disaster recovery.
```

A backup left on the same host as the original is half a backup. Push it off-host immediately:
```sh
# rclone to S3
rclone copy /var/backups/z4j-2026-04-24.db s3:my-backups/z4j/

# scp to a backup server
scp /var/backups/z4j-2026-04-24.db backup-host:/srv/z4j-backups/

# Restic / Borg / your existing backup tooling - just give it the file
```

## PostgreSQL backup
```sh
z4j backup --output /var/backups/z4j-$(date +%Y-%m-%d).dump
```

Requires `pg_dump` on the operator’s PATH (`apt install postgresql-client` on Debian/Ubuntu; `brew install libpq && brew link --force libpq` on macOS).
Uses `pg_dump -Fc -Z6 --no-owner --no-acl`:

- `-Fc` - custom format (compressible, selective-restore-able)
- `-Z6` - gzip compression level 6
- `--no-owner --no-acl` - portable across environments (don’t bake in roles)
z4j keeps serving requests; `pg_dump` is a normal connection. Output looks the same as the SQLite case.
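The custom format also allows selective restores outside of z4j. A hedged sketch using stock `pg_restore` (`z4j_scratch`, the dump path, and the `tasks` filter are illustrative; the block no-ops when the tools or dump are absent):

```sh
set -eu
DUMP=/var/backups/z4j-2026-04-24.dump   # example path from above
if command -v pg_restore >/dev/null 2>&1 && [ -f "$DUMP" ]; then
  pg_restore -l "$DUMP" > toc.txt                    # list the archive's table of contents
  grep 'TABLE DATA' toc.txt | grep tasks > keep.txt  # keep only the entries we want
  pg_restore -L keep.txt -d z4j_scratch "$DUMP"      # restore just those entries
else
  echo "skipping: pg_restore or dump file not available"
fi
```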
## SQLite restore

Stop z4j first. SQLite needs an exclusive write lock to be replaced safely:

```sh
sudo systemctl stop z4j
z4j restore /var/backups/z4j-2026-04-24.db --force
sudo systemctl start z4j
```

`--force` is required - the CLI refuses without it as a safety against accidentally restoring on top of a live install. The existing DB (if any) is moved to `<dbpath>.pre-restore-bak` so you can roll back manually:
```sh
# If something is wrong after restore:
sudo systemctl stop z4j
mv ~/.z4j/z4j.db ~/.z4j/z4j.db.bad
mv ~/.z4j/z4j.db.pre-restore-bak ~/.z4j/z4j.db
sudo systemctl start z4j
```

After restart, verify with `z4j check && z4j status`.
## PostgreSQL restore

Same shape:

```sh
sudo systemctl stop z4j
z4j restore /var/backups/z4j-2026-04-24.dump --force
sudo systemctl start z4j
```

Uses `pg_restore --clean --if-exists --no-owner --no-acl` so the restore is idempotent against a partially-populated target DB.
For Postgres you can also restore to a different target (point `Z4J_DATABASE_URL` at a fresh DB and call `z4j restore`). Useful for staging-from-prod and disaster-recovery dry runs.
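A sketch of such a dry run. The scratch DB name and URL shape are illustrative, and the drill environment must use the same `Z4J_SECRET` as the source, or the audit-chain verification described in the next section will fail:

```sh
set -eu
# Most recent dump, using the path convention from the examples above
BACKUP="$(ls -t /var/backups/z4j-*.dump 2>/dev/null | head -n 1 || true)"

if [ -n "$BACKUP" ] && command -v z4j >/dev/null 2>&1; then
  # Point the CLI at a scratch database - never at production.
  export Z4J_DATABASE_URL="postgresql+asyncpg://z4j:secret@localhost/z4j_drill"
  z4j restore "$BACKUP" --force
  z4j check && z4j audit verify
else
  echo "skipping drill: no backup found or z4j not on PATH"
fi
```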
## Verifying after a restore

```sh
z4j check         # config + DB connectivity + alembic at head
z4j status        # row counts: users, projects, agents, tasks, audit
z4j audit verify  # walk the HMAC audit chain end-to-end
```

If `audit verify` fails after a restore, you have an integrity issue - usually the source backup was taken on a different brain instance with a different `Z4J_SECRET`. The audit chain is signed with the master HMAC; restoring rows from one install into another with a different secret breaks the chain.
## Scheduled backups

The CLI is designed for cron / systemd timer usage. Sample systemd service + timer pair (Debian/Ubuntu). The service unit:

```ini
[Unit]
Description=z4j daily backup
After=z4j.service

[Service]
Type=oneshot
User=z4j
Environment=Z4J_DATABASE_URL=sqlite+aiosqlite:////srv/z4j/.z4j/z4j.db
# systemd does not expand date formats and a literal % must be escaped as %%,
# so wrap the command in a shell to get a dated filename.
ExecStart=/bin/sh -c '/srv/venv/bin/z4j backup --output /var/backups/z4j-$(date +%%Y-%%m-%%d).db'
ExecStartPost=/usr/bin/find /var/backups -name 'z4j-*.db' -mtime +14 -delete
```

The matching timer unit:

```ini
[Unit]
Description=Run z4j backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable and verify:

```sh
sudo systemctl enable --now z4j-backup.timer
sudo systemctl list-timers z4j-backup.timer
```

For Docker, run the backup command via `docker exec` from the host’s cron:
```sh
0 3 * * * docker exec z4j z4j backup --output /backups/z4j-$(date +\%Y-\%m-\%d).db && find /backups -name 'z4j-*.db' -mtime +14 -delete
```

## What else to back up
The z4j DB is the bulk of your state, but a complete restore needs:
| Path | What it carries | How often it changes |
|---|---|---|
| DB (SQLite file or Postgres) | All operational state - users, agents, tasks, audit chain, schedules | Continuous |
| `~/.z4j/secret.env` (SQLite/pip) | Auto-minted `Z4J_SECRET` + `Z4J_SESSION_SECRET` - the only copy unless you set them via env | Once per install (immutable) |
| `~/.z4j/allowed-hosts` | Operator-managed Host allow-list | When you add/remove hosts |
| Agent tokens (in your apps) | Bearer tokens minted from the dashboard | When you mint/rotate |
| `.env` / Docker `compose.yml` | Whatever you set `Z4J_*` env vars to | When you change config |
If you set `Z4J_SECRET` + `Z4J_SESSION_SECRET` explicitly via env vars (recommended for production), `~/.z4j/secret.env` doesn’t exist and you only need the env vars stored in your secret manager. The CLI’s `z4j backup` covers the DB; you’re responsible for the env / secret store.
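One way to mint the pair yourself - hedged: the 32-byte hex format is an assumption about what z4j accepts, and only the variable names come from this page. Prefer feeding the values straight into your secret manager rather than keeping the file around:

```sh
umask 077                      # resulting file readable by owner only
cat > z4j-secrets.env <<EOF
Z4J_SECRET=$(openssl rand -hex 32)
Z4J_SESSION_SECRET=$(openssl rand -hex 32)
EOF
```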
## See also

- `z4j doctor` - pre-flight check that warns when `~/.z4j/secret.env` exists (i.e. auto-minted, please back it up)
- Upgrade and rollback - take a backup before every upgrade
- Incident response - using a backup to recover from compromise