Quick start
Get Backstop running and protecting a local PostgreSQL database in under 5 minutes using Docker Compose.
This guide gets you from zero to a running Backstop stack with a live interception and validated recovery demo. You need Docker and Docker Compose installed.
Prerequisites
- Docker 24+ and Docker Compose v2
- Python 3.9+ or Node.js 18+ (depending on which SDK you want to try)
- 4 GB of free RAM
Step 1 — Clone and start services
```shell
git clone https://github.com/pratyush2514/Backstop.git
cd Backstop
npm install
npm run demo
npm run e2e
```

`npm run demo` prints the shortest local happy path. `npm run e2e` starts the full stack, verifies health, blocks database-level destruction, approves a table-level destructive query, restores the table, validates the restore, and prints a JSON report.
If you want to start services manually, run:
```shell
docker compose -f deploy/docker-compose.yml up -d --build
```

The local stack starts five services:
| Service | Port | Purpose |
|---|---|---|
| backstop-gateway | 8080 | SQL proxy and policy enforcement |
| backstop-sync | 9091 (metrics) | Table snapshots and bypass detection |
| postgres | 5433 | Your protected database |
| minio | 9000 | S3-compatible snapshot storage |
| prometheus | 9090 | Metrics collection |
Check that everything is healthy:
```shell
$ docker compose -f deploy/docker-compose.yml ps
NAME               STATUS    PORTS
backstop-gateway   running   0.0.0.0:8080->8080/tcp
backstop-sync      running
postgres           running   0.0.0.0:5433->5432/tcp
minio              running   0.0.0.0:9000->9000/tcp
prometheus         running   0.0.0.0:9090->9090/tcp
```

```shell
$ curl http://localhost:8080/health
{"status":"ok","gateway":"running","sidecar":"healthy"}
```
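If you script the startup, you may want to block until the gateway reports healthy rather than curl by hand. A minimal Python sketch against the `/health` endpoint shown above; the `wait_for_healthy` helper and its injectable `fetch` parameter are illustrative, not part of any Backstop SDK:

```python
import json
import time
import urllib.request

def wait_for_healthy(url="http://localhost:8080/health", timeout=60.0, fetch=None):
    """Poll the gateway health endpoint until it reports status 'ok'."""
    fetch = fetch or (lambda u: urllib.request.urlopen(u, timeout=5).read())
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            body = json.loads(fetch(url))
            if body.get("status") == "ok":
                return body  # gateway (and sidecar, per the payload) is up
        except (OSError, ValueError):
            pass  # connection refused or partial response; keep polling
        time.sleep(1.0)
    raise TimeoutError(f"gateway at {url} not healthy after {timeout}s")
```

Call `wait_for_healthy()` after `docker compose up -d` and before running any protected queries.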
Step 2 — Seed the database
```shell
docker compose -f deploy/docker-compose.yml exec postgres \
  psql -U postgres -d testdb -f /docker-entrypoint-initdb.d/seed.sql
```

The Compose stack also loads this seed file on first boot. Re-running it is useful when you reset the local volume.
Step 3 — Install the SDK
- Python SDK: `pip install backstop`
- Node.js SDK: `npm install @backstop/client`
Step 4 — Run your first protected query
Python

```python
import psycopg2
import backstop

raw_conn = psycopg2.connect("postgresql://postgres:password@localhost:5433/testdb")
db = backstop.guard(
    conn=raw_conn,
    storage="s3://backstop-test@http://localhost:9000",
    actor="quickstart-demo",
    mode="protect",
)

# This is safe — executes immediately
rows = db.execute("SELECT COUNT(*) FROM users")
print(f"Users: {rows[0][0]}")  # → Users: 50000

# This is CRITICAL — gets intercepted and snapshotted
try:
    db.execute("DROP TABLE users")
except backstop.PolicyBlockedError as e:
    print(f"Blocked: {e.risk_level} — {e.policy_reason}")
```

Node.js
```javascript
import { BackstopClient } from "@backstop/client";

const backstop = new BackstopClient({
  url: "http://localhost:8080",
  token: "dev-token",
  agentId: "quickstart-demo",
});

// Safe — executes immediately
const count = await backstop.query("SELECT COUNT(*) FROM users");
console.log("Users:", count.rows?.[0]);

// Critical — intercepted
const result = await backstop.executeQuery("DROP TABLE users");
console.log("Status:", result.status); // "blocked" or "approval_required"
console.log("Risk:", result.risk_level); // "CRITICAL"
console.log("Snapshot:", result.snapshot_id); // "snap_a3f9..."
```

Step 5 — See the audit trail
```shell
curl http://localhost:8080/metadata/audit | jq '.events[] | {sql, risk_level, status, agent_id}'
```

You'll see every query — safe and critical — with full classification metadata.
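The same trail can be consumed programmatically, for example to alert on critical interceptions. A hedged Python sketch: the response shape is assumed from the jq filter above (an `events` array whose entries carry `sql`, `risk_level`, `status`, and `agent_id`), and the `critical_events` helper plus the sample payload are illustrative, not part of the API:

```python
import json
import urllib.request

AUDIT_URL = "http://localhost:8080/metadata/audit"  # endpoint from the curl example

def critical_events(audit):
    """Return only CRITICAL events, trimmed to the fields shown by the jq filter."""
    return [
        {k: e.get(k) for k in ("sql", "risk_level", "status", "agent_id")}
        for e in audit.get("events", [])
        if e.get("risk_level") == "CRITICAL"
    ]

# Against the live stack you would fetch the trail like this:
#   audit = json.load(urllib.request.urlopen(AUDIT_URL))
# Sample payload standing in for a live response:
sample = {"events": [
    {"sql": "SELECT COUNT(*) FROM users", "risk_level": "SAFE",
     "status": "executed", "agent_id": "quickstart-demo"},
    {"sql": "DROP TABLE users", "risk_level": "CRITICAL",
     "status": "blocked", "agent_id": "quickstart-demo"},
]}

for event in critical_events(sample):
    print(event["sql"], "->", event["status"])  # → DROP TABLE users -> blocked
```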
Step 6 — Recover from snapshot
For humans, use the guided recovery workflow first:
```shell
backstop recover \
  --db postgresql://postgres:password@localhost:5433/testdb \
  --storage s3://backstop-test@http://localhost:9000 \
  --table users
```

The wizard lists valid checksummed snapshots, restores to users_recovered by default, runs validation automatically, and prints copyback SQL only if validation passes. Existing commands such as `backstop restore`, `backstop restore-validate`, and `backstop restore-copyback-plan` remain available for automation.
For database-level recovery proof, run:
```shell
npm run e2e:pitr
```

That drill validates a real local PostgreSQL base-backup restore plus WAL replay. It complements table snapshots; it does not make table snapshots a replacement for PostgreSQL PITR.
What to do next
- Read How it works to understand the safety model
- Explore Risk classification to understand all five risk levels
- Set up Policy configuration for production behavior
- Connect your real AI agent via the Python SDK or Node.js SDK