# Quickstart

Step-by-step guide from clone to first API call.
This guide covers the full local development environment: infrastructure services via Docker Compose, environment variable configuration, database setup, and the Makefile workflow.
## Prerequisites

| Tool | Version | Install |
|---|---|---|
| Node.js | 20+ (LTS) | nodejs.org or `nvm install 20` |
| pnpm | 10+ | `npm install -g pnpm@10` |
| Docker | Desktop or OrbStack | docker.com or orbstack.dev |
| Wrangler CLI | 4+ | `npm install -g wrangler` |
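The version floors above can be checked mechanically. A minimal sketch of the comparison, using a hard-coded sample string in place of the real `node --version` output so it runs anywhere:

```sh
# "v20.11.1" is a stand-in for "$(node --version)".
required=20
have=$(printf '%s' "v20.11.1" | sed 's/^v//' | cut -d. -f1)
if [ "$have" -ge "$required" ]; then
  echo "node OK (major $have)"
else
  echo "node too old (need $required+, have $have)"
fi
```

Swap the sample string for the actual command output to gate your own environment.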
## Infrastructure Services

Start all infrastructure services with a single command:

```sh
docker compose up -d
```

| Service | Image | Port | Purpose |
|---|---|---|---|
| Postgres | pgvector/pgvector:pg17 | 5432 | Primary database (Neon-compatible) |
| TigerBeetle | Custom (see infra/docker/tigerbeetle/) | 8080 | Double-entry accounting ledger |
| Valkey | valkey/valkey:8-bookworm | 6379 | KV cache / session store (Redis-compatible) |
| SpiceDB | authzed/spicedb:v1.38.1 | 50051 (gRPC), 8443 (HTTP) | Zanzibar-style RBAC authorization |
| Grafana | grafana/grafana:latest | 3333 | Monitoring dashboards |
| Metabase | metabase/metabase:latest | 3334 | BI / SQL analytics |
| Documenso | documenso/documenso:latest | 3335 | E-signature workflows |
| Drizzle Studio | ghcr.io/drizzle-team/gateway:latest | 4983 | Visual database browser |
| Mailpit | axllent/mailpit:latest | 8025 (UI), 1025 (SMTP) | Email capture and testing |
Once services are running, use these connection strings in your `.env`:

```sh
DATABASE_URL=postgresql://openinsure:openinsure@localhost:5432/openinsure
TIGERBEETLE_URL=http://localhost:8080
TIGERBEETLE_API_KEY=localdev
SPICEDB_GRPC_PRESHARED_KEY=localdev
```

```sh
# Start all services
docker compose up -d

# Start specific services only
docker compose up -d postgres tigerbeetle redis

# View logs (all services)
docker compose logs -f

# View logs (single service)
docker compose logs -f tigerbeetle

# Stop all services (keep data)
docker compose down

# Stop + wipe all data (fresh start)
docker compose down -v
```

Postgres runs with pgvector enabled (Postgres 17). An init script at `infra/docker/postgres-init.sql` creates the `openinsure` and `documenso` databases on first start. Data persists in the `pg_data` Docker volume.
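The `DATABASE_URL` connection string can be cross-checked against the compose port mapping with plain shell parameter expansion. A quick sketch:

```sh
DATABASE_URL=postgresql://openinsure:openinsure@localhost:5432/openinsure

# Strip everything up to '@', then everything from the first '/' onward,
# leaving "host:port".
hostport=${DATABASE_URL#*@}
hostport=${hostport%%/*}
echo "host=${hostport%%:*} port=${hostport##*:}"
```

This should print `host=localhost port=5432`, matching the Postgres row in the service table.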
TigerBeetle runs in single-replica mode with a custom Docker build (`infra/docker/tigerbeetle/Dockerfile`). It requires `privileged: true` for io_uring support. The HTTP API proxy on port 8080 matches the interface the API workers expect in production.

SpiceDB runs with an in-memory datastore (no persistence). The schema is loaded from `packages/auth/spice/schema.zed`. The gRPC preshared key is `localdev`.

Mailpit captures all outbound SMTP on port 1025. Open http://localhost:8025 to view captured emails (OTP codes, policy documents, notifications).
## Environment Variables

Copy the root `.env.example`:

```sh
cp .env.example .env
```

Copy the API worker dev vars:

```sh
cp apps/api/.dev.vars.example apps/api/.dev.vars
```

Set required variables in `.env`:

```sh
DATABASE_URL=postgresql://openinsure:openinsure@localhost:5432/openinsure
JWT_SECRET=your-secret-at-least-32-chars-long
API_SECRET=your-api-secret
SERVICE_SECRET=your-service-secret
```

Match `.dev.vars` to your `.env` (especially `API_SECRET`, `JWT_SECRET`, and the DB connection).
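One way to generate values that clear the 32-character minimum is a sketch like the following, assuming `openssl` is installed (any CSPRNG source works just as well):

```sh
# 32 random bytes, base64-encoded: a 44-character string,
# comfortably over the 32-character minimum for JWT_SECRET.
JWT_SECRET=$(openssl rand -base64 32)
echo "${#JWT_SECRET}"
```

Run it once per secret so `JWT_SECRET`, `API_SECRET`, and `SERVICE_SECRET` are all distinct.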
| Variable | Description |
|---|---|
| `DATABASE_URL` | Postgres connection string (used by Drizzle migrations) |
| `JWT_SECRET` | API JWT signing secret (min 32 chars) |
| `API_SECRET` | System bearer secret for machine/demo flows |
| `SERVICE_SECRET` | Service token exchange secret |
| `PORTAL_SECRET` | Producer portal token exchange secret |
| `ADMIN_SECRET` | Admin token exchange secret |
| `AUTH_URL` | Auth Worker base URL |
| `CLOUDFLARE_ACCOUNT_ID` | Required for Wrangler and deploy tooling |
| `CLOUDFLARE_API_TOKEN` | Required for Wrangler and deploy tooling |
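A missing secret usually surfaces later as a confusing runtime error, so a small preflight loop that reports unset variables up front can save time. A sketch (the variable list mirrors the required entries in the table above):

```sh
# Report which required variables are set in the current shell.
for var in DATABASE_URL JWT_SECRET API_SECRET SERVICE_SECRET; do
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "ok: $var"
  else
    echo "missing: $var"
  fi
done
```

Source your `.env` first (or use a dotenv loader) so the check sees the same values your dev servers will.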
## Database Setup

```sh
# Apply all Drizzle migrations against DATABASE_URL
pnpm --filter @openinsure/db db:migrate

# Or via Make
make db-migrate
```

This runs the squashed bootstrap migration plus all forward Drizzle migrations.

```sh
# Load base rate tables (recommended for local dev)
pnpm db:seed:rating

# Or via Make
make db-seed
```

After modifying schemas in `packages/db/src/schema/`:

```sh
make db-generate
```

The underwriting workbench uses a separate Cloudflare D1 database (`oi-submissions`):

```sh
cd apps/underwriting-workbench
npx wrangler d1 migrations apply oi-submissions --local
```

To browse the database with Drizzle Studio, open http://localhost:4983 in your browser. The master password is `localdev`.
## Makefile Workflow

The Makefile in the repo root provides shortcuts for common operations:
| Command | Description |
|---|---|
| `make dev` | Start all dev servers (Turborepo) |
| `make dev-api` | Start API worker only |
| `make dev-workbench` | Start underwriting workbench only |
| `make dev-portal` | Start policyholder portal only |
| `make dev-producer` | Start producer portal only |
| `make dev-admin` | Start admin dashboard only |
| Command | Description |
|---|---|
| `make test` | Run all tests (`pnpm turbo test`) |
| `make test-api` | Run API tests only |
| `make test-watch` | Vitest watch mode |
| `make test-coverage` | Run tests with V8 coverage enforcement |
| `make lint` | Lint check (Biome) |
| `make format` | Auto-fix lint (Biome, no formatting — oxfmt owns formatting) |
| `make typecheck` | Type-check all packages |
| `make ci` | Full local CI: lint + typecheck + test + pages build |
| Command | Description |
|---|---|
| `make db-migrate` | Run database migrations |
| `make db-generate` | Generate Drizzle schema types |
| `make db-seed` | Seed rating data |
| Command | Description |
|---|---|
| `make build` | Build all packages and apps |
| `make ship` | Commit, push, and deploy all apps |
| `make ship-api` | Deploy API worker only |
| `make ship-workbench` | Deploy underwriting workbench only |
| `make clean` | Remove build artifacts and caches |
When running `make dev`, each app starts on its own port:
| App | URL | Filter |
|---|---|---|
| API Worker | http://localhost:8787 | @openinsure/api |
| Underwriting Workbench | http://localhost:3000 | @openinsure/underwriting-workbench |
| Producer Portal | http://localhost:3001 | @openinsure/producer-portal |
| Admin Dashboard | http://localhost:3002 | @openinsure/admin |
| Policyholder Portal | http://localhost:3003 | @openinsure/portal |
| Ops Workbench (Vite SPA) | http://localhost:5173 | @openinsure/workbench |
| Astro Docs | http://localhost:4321 | @openinsure/docs |
| Finance Portal | http://localhost:3006 | @openinsure/finance-portal |
| Compliance Portal | http://localhost:3007 | @openinsure/compliance-portal |
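Port collisions are the most common cause of a dev server silently picking a different port. A quick bash sketch that probes each port in the table above (it uses bash's `/dev/tcp` virtual device, so no extra tools are needed; a refused connection means the port is free):

```sh
for port in 8787 3000 3001 3002 3003 5173 4321 3006 3007; do
  # The subshell's exec fails if nothing is listening on the port.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "$port: in use"
  else
    echo "$port: free"
  fi
done
```

Anything reported "in use" before you run `make dev` is being held by another process.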
OpenInsure uses two tools with distinct responsibilities: oxfmt owns formatting (configured in `.oxfmtrc.json`), and Biome owns linting (configured in `biome.json`).

```sh
# Format all files
oxfmt .

# Check formatting (CI gate)
oxfmt --check .

# Lint with auto-fix
biome check --write .

# Lint check only (no writes)
biome check .
```

On every commit, Husky runs:

1. `biome check --write` — lint auto-fix
2. `oxfmt --write` — formatting (runs last, gets the final word)

The hooks also guard against committing `.env` files, secrets, and API keys, and against `any` type usage.

For local API testing, exchange `API_SECRET` for a short-lived superadmin JWT:
```sh
curl -s -X POST http://localhost:8787/auth/demo \
  -H "Content-Type: application/json" \
  -d '{"secret":"'"$API_SECRET"'"}' | jq -r '.token'
```

Export the token for subsequent calls:
```sh
export TOKEN="<paste-token>"
```
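To sanity-check a token's claims without calling the API, you can decode the JWT's payload (the second dot-separated segment). A self-contained sketch using a hand-built sample token; note that real JWTs use unpadded base64url encoding, while plain `base64` is used here so the example runs as-is:

```sh
# Build a fake token: header.payload.signature, payload base64-encoded.
payload='{"sub":"superadmin","exp":1700000000}'
token="header.$(printf '%s' "$payload" | base64).sig"

# Decode the second field and pull out a claim with jq.
printf '%s' "$token" | cut -d. -f2 | base64 -d | jq -r '.sub'
```

Against a real token from the demo endpoint, the same pipeline lets you inspect the role and `exp` claims before debugging a 401.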
```sh
# Test the API
curl -s http://localhost:8787/v1/submissions \
  -H "Authorization: Bearer $TOKEN" | jq '.'
```

## Troubleshooting

TigerBeetle requires io_uring, which needs `privileged: true` in the Docker Compose config. On OrbStack this works automatically. On Docker Desktop for Mac, ensure you have the latest version and that the Linux VM has sufficient resources (4 GB+ RAM recommended).
If port 3000 is already in use (e.g., by another dev tool), start individual apps on custom ports:

```sh
PORT=3003 pnpm --filter @openinsure/workbench dev
```

SpiceDB uses an in-memory datastore, so the schema must be re-applied after every restart:
```sh
# Load the authorization schema
zed schema write packages/auth/spice/schema.zed \
  --endpoint=localhost:50051 \
  --insecure \
  --token=localdev
```

When things get stuck, wipe everything and start over:
```sh
docker compose down -v   # Remove all containers + volumes
pnpm install             # Reinstall dependencies
docker compose up -d     # Restart services
make db-migrate          # Re-run migrations (needs Postgres up)
make dev                 # Start dev servers
```
Related guides:

- **Testing**: Vitest setup, Playwright E2E, mocking patterns, and coverage.
- **CI/CD**: How the CircleCI pipeline runs format, lint, test, build, deploy.
- **Database Migrations**: Drizzle migration workflow, squash strategy, and production rollout.