# Developer Setup

Getting the full Leadmetrics stack running locally. Budget 15–20 minutes for a first-time setup.
## Prerequisites
| Tool | Version | Notes |
|---|---|---|
| Node.js | 22.x | Use nvm or fnm to manage versions |
| pnpm | 9.x | `npm install -g pnpm` |
| Docker Desktop | Latest | Must be running before starting any service |
| Git | Any | — |
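As a quick sanity check, the sketch below compares installed major versions against the table (`major_of` is a helper invented here, not part of the repo):

```shell
# major_of extracts the major version from strings like "v22.3.0" or "9.12.1".
major_of() {
  ver="${1#v}"        # drop a leading "v" if present
  echo "${ver%%.*}"   # keep everything before the first dot
}

[ "$(major_of "$(node --version 2>/dev/null)")" = "22" ] || echo "warning: Node.js 22.x expected"
[ "$(major_of "$(pnpm --version 2>/dev/null)")" = "9" ]  || echo "warning: pnpm 9.x expected"
```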
## 1. Clone and Install
```bash
git clone <repo-url> leadmetrics-v3
cd leadmetrics-v3
pnpm install
```

`pnpm install` installs all workspace packages in one pass. Do not run `npm install` — it will break the workspace links.
## 2. Environment Files

Each app and server needs its own `.env` file. None are committed — copy from `.env.example` as a starting point.
```bash
# API (most env vars live here)
cp .env.example apps/api/.env

# Worker servers — at minimum DATABASE_URL + REDIS_URL
cp .env.example apps/servers/agents/.env
cp .env.example apps/servers/billing/.env
cp .env.example apps/servers/notifications/.env
cp .env.example apps/servers/reporting/.env
cp .env.example apps/servers/ragengine/.env
cp .env.example apps/servers/search-indexer/.env

# Apps — only need NEXT_PUBLIC_APP_URL and API URL
cp .env.example apps/dashboard/.env.local
cp .env.example apps/manage/.env.local
cp .env.example apps/dm/.env.local
```

See Environment Variables for the full reference of every variable and which service needs it.
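If you prefer one command over twelve, a small loop can do the copies. This is a sketch: `env_targets` is a name invented here, and the path list simply mirrors the commands above.

```shell
# Print every .env destination used in this section.
env_targets() {
  printf '%s\n' apps/api/.env
  for s in agents billing notifications reporting ragengine search-indexer; do
    printf '%s\n' "apps/servers/$s/.env"
  done
  for a in dashboard manage dm; do
    printf '%s\n' "apps/$a/.env.local"
  done
}

# Only copy when run from the repo root (where .env.example lives).
if [ -f .env.example ]; then
  env_targets | while read -r dest; do
    cp .env.example "$dest"
  done
fi
```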
## 3. Start Docker Infrastructure

From the repo root:

```bash
docker compose up -d
```

This starts eight containers:
| Container | Port | Purpose |
|---|---|---|
| leadmetrics-postgres | 5434 | Primary PostgreSQL |
| leadmetrics-mongo | 27017 | MongoDB (audit logs, agent outputs) |
| leadmetrics-redis | — (internal) | Redis for Leadmetrics BullMQ |
| leadmetrics-qdrant | 6333/6334 | Vector DB for RAG |
| leadmetrics-typesense | 8108 | Typesense search engine |
| ragmanager-postgres | 5433 | RAG manager PostgreSQL |
| ragmanager-redis | 6379 | Redis exposed to host (apps connect here) |
| ragmanager-qdrant | 6333/6334 | Qdrant for RAG manager |
Verify all containers are healthy:

```bash
docker ps --format "table {{.Names}}\t{{.Status}}"
```

**Redis mapping check:** `leadmetrics-redis` must be port-mapped. If `docker ps` shows `6379/tcp` without `-> 6379`, recreate with `docker compose up -d`.
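To script that check, the helper below (hypothetical, not in the repo) inspects the `Ports` column for an actual host mapping:

```shell
# Succeeds only if the ports string shows 6379 published to the host,
# e.g. "0.0.0.0:6379->6379/tcp" rather than the internal-only "6379/tcp".
redis_mapped() {
  echo "$1" | grep -q -e '->6379/tcp'
}

# Usage against a live daemon (skipped when docker is unavailable):
if command -v docker >/dev/null 2>&1; then
  ports=$(docker ps --filter name=leadmetrics-redis --format '{{.Ports}}')
  redis_mapped "$ports" || echo "leadmetrics-redis not port-mapped; run: docker compose up -d"
fi
```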
## 4. Set Up the Database
Run once after the first clone, or any time you need a clean slate:
```bash
# Drop and recreate the database
docker exec leadmetrics-postgres psql -U leadmetrics -d postgres -c "DROP DATABASE IF EXISTS leadmetrics;"
docker exec leadmetrics-postgres psql -U leadmetrics -d postgres -c "CREATE DATABASE leadmetrics;"

# Push schema (from packages/db)
cd packages/db
DATABASE_URL="postgresql://leadmetrics:leadmetrics@localhost:5434/leadmetrics" pnpm db:push
cd ../..

# Seed (reads DATABASE_URL from apps/api/.env)
cd packages/db
pnpm db:seed
cd ../..
```

Seed accounts:
| Email | Role | Description |
|---|---|---|
| superadmin@leadmetrics.ai | admin | Super admin (Manage portal) |
| reviewer@leadmetrics.ai | reviewer | DM reviewer (DM Portal) |
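The reset sequence above can also be wrapped in one function. This is a sketch (`reset_db` is a name invented here); passing `echo` as the first argument gives a dry run that only prints the commands.

```shell
reset_db() {
  run="${1:-}"   # pass "echo" for a dry run
  $run docker exec leadmetrics-postgres psql -U leadmetrics -d postgres \
    -c "DROP DATABASE IF EXISTS leadmetrics;"
  $run docker exec leadmetrics-postgres psql -U leadmetrics -d postgres \
    -c "CREATE DATABASE leadmetrics;"
  ( cd packages/db &&
    DATABASE_URL="postgresql://leadmetrics:leadmetrics@localhost:5434/leadmetrics" $run pnpm db:push &&
    $run pnpm db:seed )
}
# reset_db        # really reset
# reset_db echo   # preview the commands only
```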
## 5. Kill Any Stale Node Processes
Old node processes from prior sessions hold ports and return misleading errors. Clear them before starting:
```bash
npx kill-port 3000 3001 3002 3003
```

Verify: `netstat -ano | findstr "300[0-3]"` should be empty.
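The verify step can be scripted too. `ports_in_use` is a hypothetical helper that scans listener output on stdin for the dev ports, so it works piped from `netstat -ano` on Windows or `ss -ltn` on Linux:

```shell
# Print any of ports 3000-3003 that appear in listener output on stdin.
ports_in_use() {
  grep -oE ':300[0-3]\b' | tr -d ':' | sort -u
}

# Example: netstat -ano | ports_in_use     (Windows)
#          ss -ltn      | ports_in_use     (Linux)
# No output means the ports are free.
```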
## 6. Start All Services
Open a terminal for each service (or use a process manager like tmux / pm2):
| Terminal | Command | URL |
|---|---|---|
| 1 | cd apps/dashboard && pnpm dev | http://localhost:3000 |
| 2 | cd apps/manage && pnpm dev | http://localhost:3001 |
| 3 | cd apps/dm && pnpm dev | http://localhost:3002 |
| 4 | cd apps/api && pnpm dev | http://localhost:3003 |
| 5 | cd apps/knowledgebase && pnpm dev | http://localhost:3004 |
| 6 | cd apps/servers/agents && pnpm dev | (no HTTP port) |
| 7 | cd apps/servers/billing && pnpm dev | (no HTTP port) |
| 8 | cd apps/servers/notifications && pnpm dev | (no HTTP port) |
| 9 | cd apps/servers/reporting && pnpm dev | (no HTTP port) |
| 10 | cd apps/servers/ragengine && pnpm dev | (no HTTP port) |
| 11 | cd apps/servers/search-indexer && pnpm dev | (no HTTP port) |
Manage portal note: Uses --turbopack flag in package.json — required on Windows to avoid an SWC worker crash on the tenant detail page.
Knowledge base first compile: ~3.5 minutes (Nextra builds a webpack context for ~289 docs files). Subsequent hot reloads are fast.
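Assuming tmux is available, the eleven terminals above can be collapsed into one session. `dev_commands` and `start_all` are names invented for this sketch; the service list mirrors the table.

```shell
services="dashboard manage dm api knowledgebase \
servers/agents servers/billing servers/notifications \
servers/reporting servers/ragengine servers/search-indexer"

# Emit one "cd && pnpm dev" line per service, in table order.
dev_commands() {
  for s in $services; do
    printf 'cd apps/%s && pnpm dev\n' "$s"
  done
}

# Open each command in its own tmux window, then attach.
start_all() {
  tmux new-session -d -s leadmetrics
  dev_commands | while read -r cmd; do
    tmux new-window -t leadmetrics "$cmd"
  done
  tmux attach -t leadmetrics
}
```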
## 7. Verify the Stack is Up
- Open http://localhost:3000 → you should see the Dashboard login page.
- Log in with `moble@leadmetrics.ai` / `password@123` (your dev account, added after seeding).
- Open http://localhost:3003/health → should return `{ "status": "ok" }`.
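For a scriptable version of the health check, the sketch below polls the endpoint until it reports ok. `healthy` and `wait_for_api` are hypothetical helpers, not part of the repo.

```shell
# Succeeds when the /health JSON reports "status": "ok".
healthy() {
  printf '%s' "$1" | grep -q '"status"[[:space:]]*:[[:space:]]*"ok"'
}

# Poll up to ~20 s for the API to come up.
wait_for_api() {
  for _ in 1 2 3 4 5 6 7 8 9 10; do
    body=$(curl -fsS http://localhost:3003/health 2>/dev/null) \
      && healthy "$body" && return 0
    sleep 2
  done
  echo "API did not become healthy" >&2
  return 1
}
```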
## Triggering Reports Manually
Reporting workers use cron jobs. To trigger them immediately in dev:
```bash
cd apps/servers/reporting
tsx --env-file .env trigger.ts
```

Requires the notifications server to be running first. If re-running on the same calendar day, clear stale BullMQ dedup keys first:

```bash
docker exec ragmanager-redis redis-cli DEL bull:notifications__email:failed
docker exec ragmanager-redis redis-cli DEL bull:notifications__email:completed
```

## Generating the Prisma Client
After any schema change in `packages/db/prisma/schema.prisma`:

```bash
# Kill node processes first (Windows EPERM issue)
npx kill-port 3000 3001 3002 3003
cd packages/db
pnpm db:generate
```

See Prisma Stale Client if you hit EPERM errors on Windows.
## Running Tests
```bash
# Unit tests across all packages
pnpm test:unit

# Integration tests (requires a running API and test database)
pnpm test:integration

# E2E tests (requires all services running)
cd apps/dashboard && pnpm test:e2e
```

Test databases are separate from dev: `api_test`, `manage_test`. See Integration Test DB Setup for full setup.
## Common Issues
| Symptom | Likely cause | Fix |
|---|---|---|
| API returns 500 on login | redisPlugin not registered in index.ts | Ensure await fastify.register(redisPlugin) runs before routers |
| Port already in use | Stale node process | npx kill-port 3000 3001 3002 3003 |
| `EPERM: operation not permitted` on `db:generate` | Stale Prisma engine process | Kill all node.exe processes, then re-run |
| Social post designer silently fails | Missing DO_SPACES env vars | Add all 6 DO_SPACES_* vars to agents .env |
| Socket.IO events not received on Windows | Browser uses 127.0.0.1, not localhost | Use http://127.0.0.1:3000 — both are now in the CORS allowlist |