
# Local Development

This guide covers the full local development environment: infrastructure services via Docker Compose, environment variable configuration, database setup, and the Makefile workflow.


| Tool | Version | Install |
| --- | --- | --- |
| Node.js | 20+ (LTS) | nodejs.org or `nvm install 20` |
| pnpm | 10+ | `npm install -g pnpm@10` |
| Docker | Desktop or OrbStack | docker.com or orbstack.dev |
| Wrangler CLI | 4+ | `npm install -g wrangler` |

Start all infrastructure services with a single command:

```sh
docker compose up -d
```

| Service | Image | Port | Purpose |
| --- | --- | --- | --- |
| Postgres | `pgvector/pgvector:pg17` | 5432 | Primary database (Neon-compatible) |
| TigerBeetle | Custom (see `infra/docker/tigerbeetle/`) | 8080 | Double-entry accounting ledger |
| Valkey | `valkey/valkey:8-bookworm` | 6379 | KV cache / session store (Redis-compatible) |
| SpiceDB | `authzed/spicedb:v1.38.1` | 50051 (gRPC), 8443 (HTTP) | Zanzibar-style RBAC authorization |
| Grafana | `grafana/grafana:latest` | 3333 | Monitoring dashboards |
| Metabase | `metabase/metabase:latest` | 3334 | BI / SQL analytics |
| Documenso | `documenso/documenso:latest` | 3335 | E-signature workflows |
| Drizzle Studio | `ghcr.io/drizzle-team/gateway:latest` | 4983 | Visual database browser |
| Mailpit | `axllent/mailpit:latest` | 8025 (UI), 1025 (SMTP) | Email capture and testing |

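Note that containers report "started" before the services inside them accept connections, so scripts that run immediately after `docker compose up -d` may need to poll. A minimal sketch, assuming bash (for its `/dev/tcp` pseudo-device); the `wait_for_port` helper is illustrative and not something shipped in the repo:

```sh
# wait_for_port: poll a TCP port until it accepts connections or retries run out.
# Illustrative helper -- not part of the OpenInsure repo.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30}
  local i
  for i in $(seq "$retries"); do
    # bash opens /dev/tcp/<host>/<port> as a TCP connection; failure means closed
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up: $host:$port"
      return 0
    fi
    sleep 1
  done
  echo "timeout waiting for $host:$port" >&2
  return 1
}

# Example: block until Postgres and Valkey are reachable
# wait_for_port localhost 5432 && wait_for_port localhost 6379
```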
Once services are running, use these connection strings in your .env:

```sh
DATABASE_URL=postgresql://openinsure:openinsure@localhost:5432/openinsure
TIGERBEETLE_URL=http://localhost:8080
TIGERBEETLE_API_KEY=localdev
SPICEDB_GRPC_PRESHARED_KEY=localdev
```
Common lifecycle commands:

```sh
# Start all services
docker compose up -d

# Start specific services only
docker compose up -d postgres tigerbeetle redis

# View logs (all services)
docker compose logs -f

# View logs (single service)
docker compose logs -f tigerbeetle

# Stop all services (keep data)
docker compose down

# Stop + wipe all data (fresh start)
docker compose down -v
```

Postgres 17 runs with the pgvector extension enabled. An init script at `infra/docker/postgres-init.sql` creates the `openinsure` and `documenso` databases on first start. Data persists in the `pg_data` Docker volume.

TigerBeetle runs in single-replica mode with a custom Docker build (infra/docker/tigerbeetle/Dockerfile). It requires privileged: true for io_uring support. The HTTP API proxy on port 8080 matches the interface the API workers expect in production.

SpiceDB runs with an in-memory datastore (no persistence). The schema is loaded from packages/auth/spice/schema.zed. The gRPC preshared key is localdev.

Mailpit captures all outbound SMTP on port 1025. Open http://localhost:8025 to view captured emails (OTP codes, policy documents, notifications).


1. Copy the root `.env.example`:

   ```sh
   cp .env.example .env
   ```

2. Copy the API worker dev vars:

   ```sh
   cp apps/api/.dev.vars.example apps/api/.dev.vars
   ```

3. Set required variables in `.env`:

   ```sh
   DATABASE_URL=postgresql://openinsure:openinsure@localhost:5432/openinsure
   JWT_SECRET=your-secret-at-least-32-chars-long
   API_SECRET=your-api-secret
   SERVICE_SECRET=your-service-secret
   ```

4. Match `.dev.vars` to your `.env` (especially `API_SECRET`, `JWT_SECRET`, and the DB connection).
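The placeholder values in step 3 should be replaced with real random strings. One way to generate them, assuming `openssl` is installed:

```sh
# Generate random local-dev secrets with openssl (assumed installed).
# base64 of 48 random bytes is 64 characters, comfortably over the
# 32-character minimum that JWT_SECRET requires.
JWT_SECRET=$(openssl rand -base64 48)
API_SECRET=$(openssl rand -hex 32)
SERVICE_SECRET=$(openssl rand -hex 32)

printf 'JWT_SECRET=%s\n' "$JWT_SECRET"
printf 'API_SECRET=%s\n' "$API_SECRET"
printf 'SERVICE_SECRET=%s\n' "$SERVICE_SECRET"
```

Paste the printed lines into `.env` and mirror them in `.dev.vars`.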

| Variable | Description |
| --- | --- |
| `DATABASE_URL` | Postgres connection string (used by Drizzle migrations) |
| `JWT_SECRET` | API JWT signing secret (min 32 chars) |
| `API_SECRET` | System bearer secret for machine/demo flows |
| `SERVICE_SECRET` | Service token exchange secret |
| `PORTAL_SECRET` | Producer portal token exchange secret |
| `ADMIN_SECRET` | Admin token exchange secret |
| `AUTH_URL` | Auth Worker base URL |
| `CLOUDFLARE_ACCOUNT_ID` | Required for Wrangler and deploy tooling |
| `CLOUDFLARE_API_TOKEN` | Required for Wrangler and deploy tooling |
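Missing variables tend to surface later as confusing runtime errors, so a small preflight check can help. A sketch; the `require_env` helper is illustrative, not part of the repo's tooling:

```sh
# require_env: return non-zero if any named variable is empty or unset.
# Illustrative helper, not part of the OpenInsure Makefile.
require_env() {
  rc=0
  for v in "$@"; do
    # indirect expansion via eval so plain sh works too
    if [ -z "$(eval "printf '%s' \"\${$v:-}\"")" ]; then
      echo "missing required variable: $v" >&2
      rc=1
    fi
  done
  return $rc
}

# Example, after loading .env into the current shell:
# require_env DATABASE_URL JWT_SECRET API_SECRET SERVICE_SECRET
```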

```sh
# Apply all Drizzle migrations against DATABASE_URL
pnpm --filter @openinsure/db db:migrate

# Or via Make
make db-migrate
```

This runs the squashed bootstrap migration plus all forward Drizzle migrations.

```sh
# Load base rate tables (recommended for local dev)
pnpm db:seed:rating

# Or via Make
make db-seed
```

After modifying schemas in packages/db/src/schema/:

```sh
make db-generate
```

The underwriting workbench uses a separate Cloudflare D1 database (oi-submissions):

```sh
cd apps/underwriting-workbench
npx wrangler d1 migrations apply oi-submissions --local
```

Open http://localhost:4983 in your browser. The master password is localdev.


The Makefile in the repo root provides shortcuts for common operations:

| Command | Description |
| --- | --- |
| `make dev` | Start all dev servers (Turborepo) |
| `make dev-api` | Start API worker only |
| `make dev-workbench` | Start underwriting workbench only |
| `make dev-portal` | Start policyholder portal only |
| `make dev-producer` | Start producer portal only |
| `make dev-admin` | Start admin dashboard only |

| Command | Description |
| --- | --- |
| `make test` | Run all tests (`pnpm turbo test`) |
| `make test-api` | Run API tests only |
| `make test-watch` | Vitest watch mode |
| `make test-coverage` | Run tests with V8 coverage enforcement |
| `make lint` | Lint check (Biome) |
| `make format` | Auto-fix lint (Biome; oxfmt owns formatting) |
| `make typecheck` | Type-check all packages |
| `make ci` | Full local CI: lint + typecheck + test + pages build |

| Command | Description |
| --- | --- |
| `make db-migrate` | Run database migrations |
| `make db-generate` | Generate Drizzle schema types |
| `make db-seed` | Seed rating data |

| Command | Description |
| --- | --- |
| `make build` | Build all packages and apps |
| `make ship` | Commit, push, and deploy all apps |
| `make ship-api` | Deploy API worker only |
| `make ship-workbench` | Deploy underwriting workbench only |
| `make clean` | Remove build artifacts and caches |

When running make dev, each app starts on its own port:

| App | URL | Filter |
| --- | --- | --- |
| API Worker | http://localhost:8787 | `@openinsure/api` |
| Underwriting Workbench | http://localhost:3000 | `@openinsure/underwriting-workbench` |
| Producer Portal | http://localhost:3001 | `@openinsure/producer-portal` |
| Admin Dashboard | http://localhost:3002 | `@openinsure/admin` |
| Policyholder Portal | http://localhost:3003 | `@openinsure/portal` |
| Ops Workbench (Vite SPA) | http://localhost:5173 | `@openinsure/workbench` |
| Astro Docs | http://localhost:4321 | `@openinsure/docs` |
| Finance Portal | http://localhost:3006 | `@openinsure/finance-portal` |
| Compliance Portal | http://localhost:3007 | `@openinsure/compliance-portal` |

OpenInsure uses two tools with distinct responsibilities:

- **oxfmt** — authoritative code formatter (Rust-based, Prettier-compatible). Config: `.oxfmtrc.json`.
- **Biome** — linter only (formatter disabled). Config: `biome.json`.

```sh
# Format all files
oxfmt .

# Check formatting (CI gate)
oxfmt --check .

# Lint with auto-fix
biome check --write .

# Lint check only (no writes)
biome check .
```

On every commit, Husky runs:

  1. `biome check --write` — lint auto-fix
  2. `oxfmt --write` — formatting (runs last, gets the final word)
  3. Credential check — blocks committing `.env` files, secrets, and API keys
  4. No-any check — blocks use of the TypeScript `any` type

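The no-any gate can be approximated with a grep over the staged TypeScript files. A simplified sketch of the idea; the actual hook's implementation may differ:

```sh
# check_no_any: fail when any given TypeScript file contains an explicit
# `any` annotation. Simplified stand-in for the real pre-commit hook.
check_no_any() {
  if grep -nE ':[[:space:]]*any($|[^[:alnum:]_])' "$@"; then
    echo "commit blocked: explicit 'any' found" >&2
    return 1
  fi
  return 0
}

# In a hook, feed it the staged .ts/.tsx files, e.g.:
# files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.tsx?$')
# [ -n "$files" ] && check_no_any $files
```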
For local API testing, exchange API_SECRET for a short-lived superadmin JWT:

```sh
curl -s -X POST http://localhost:8787/auth/demo \
  -H "Content-Type: application/json" \
  -d '{"secret":"'"$API_SECRET"'"}' | jq -r '.token'
```

Export the token for subsequent calls:

```sh
export TOKEN="<paste-token>"

# Test the API
curl -s http://localhost:8787/v1/submissions \
  -H "Authorization: Bearer $TOKEN" | jq '.'
```

TigerBeetle requires io_uring, which needs `privileged: true` in the Docker Compose config. On OrbStack this works automatically. On Docker Desktop for Mac, make sure you are on the latest version and that the Linux VM has sufficient resources (4 GB+ RAM recommended).

If port 3000 is already in use (e.g., by another dev tool), start individual apps on custom ports:

```sh
PORT=3003 pnpm --filter @openinsure/workbench dev
```
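To see which of the default app ports are already taken before picking a replacement, a quick probe works. A sketch assuming bash (for its `/dev/tcp` pseudo-device); the port list mirrors the table earlier on this page:

```sh
# Probe the default dev ports; print the ones that something is already using.
# Assumes bash's /dev/tcp pseudo-device.
for port in 8787 3000 3001 3002 3003 5173 4321 3006 3007; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is already in use"
  fi
done
```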

SpiceDB uses an in-memory datastore — schema must be re-applied after every restart:

```sh
# Load the authorization schema
zed schema write packages/auth/spice/schema.zed \
  --endpoint=localhost:50051 \
  --insecure \
  --token=localdev
```

When things get stuck, wipe everything and start over:

Terminal window
docker compose down -v # Remove all containers + volumes
pnpm install # Reinstall dependencies
make db-migrate # Re-run migrations
docker compose up -d # Restart services
make dev # Start dev servers

- **Quickstart**: step-by-step guide from clone to first API call.
- **Testing**: Vitest setup, Playwright E2E, mocking patterns, and coverage.
- **CI/CD**: how the CircleCI pipeline runs format, lint, test, build, and deploy.
- **Database Migrations**: Drizzle migration workflow, squash strategy, and production rollout.