Infrastructure — Docker, Coolify & Deployment
Local Development — Docker Compose
Two compose files cover the two common development modes:
| File | Use when |
|---|---|
| docker-compose.yml | Day-to-day development — infra only; apps run via pnpm dev |
| docker-compose.apps.yml | Full-stack local run — all apps containerised; no Node.js required |
docker-compose.yml — Infrastructure services
version: "3.9"
services:
# ── Relational database ──────────────────────────────────────
postgres:
image: postgres:16-alpine
container_name: leadmetrics-postgres
restart: unless-stopped
environment:
POSTGRES_DB: leadmetrics
POSTGRES_USER: leadmetrics
POSTGRES_PASSWORD: leadmetrics
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U leadmetrics"]
interval: 5s
timeout: 5s
retries: 5
# ── Document database ────────────────────────────────────────
mongo:
image: mongo:7
container_name: leadmetrics-mongo
restart: unless-stopped
environment:
MONGO_INITDB_ROOT_USERNAME: leadmetrics
MONGO_INITDB_ROOT_PASSWORD: leadmetrics
volumes:
- mongo_data:/data/db
ports:
- "27017:27017"
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 5
# ── Queue + pub/sub ──────────────────────────────────────────
redis:
image: redis:7-alpine
container_name: leadmetrics-redis
restart: unless-stopped
command: redis-server --appendonly yes
volumes:
- redis_data:/data
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
# ── Vector database (RAG) ────────────────────────────────────
qdrant:
image: qdrant/qdrant:v1.9.1 # pin — never use latest in staging/prod
container_name: leadmetrics-qdrant
restart: unless-stopped
volumes:
- qdrant_data:/qdrant/storage
ports:
- "6333:6333" # HTTP API
- "6334:6334" # gRPC API
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:6333/readyz || exit 1"]
interval: 10s
timeout: 5s
retries: 5
# ── Local LLM ────────────────────────────────────────────────
ollama:
image: ollama/ollama:0.4.1 # pin — never use latest in staging/prod
container_name: leadmetrics-ollama
restart: unless-stopped
volumes:
- ollama_data:/root/.ollama
ports:
- "11434:11434"
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:11434/ || exit 1"]
interval: 10s
timeout: 5s
retries: 5
# GPU support — uncomment if running on a machine with NVIDIA GPU:
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: 1
# capabilities: [gpu]
# ── Ollama model initialiser ──────────────────────────────────
# Pulls all required models on first run, then exits.
# Re-runs are fast (models already cached in ollama_data volume).
ollama-init:
image: ollama/ollama:0.4.1 # must match ollama service version
container_name: leadmetrics-ollama-init
restart: "no"
depends_on:
ollama:
condition: service_healthy
environment:
OLLAMA_HOST: http://ollama:11434
entrypoint: ["/bin/sh", "-c"]
command: >
"ollama pull gemma3:4b &&
ollama pull nomic-embed-text &&
ollama pull BAAI/bge-reranker-v2-m3 &&
echo 'All models ready.'"
# ── High-accuracy PDF parser (optional) ──────────────────────
# Only starts when --profile docling is passed.
# Required for Agency/Enterprise plan tenants using Docling PDF parsing.
docling:
image: quay.io/docling-project/docling-serve:0.4.0 # pin — never use latest in staging/prod
container_name: leadmetrics-docling
restart: unless-stopped
profiles: ["docling"]
ports:
- "5001:5000"
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:5000/health || exit 1"]
interval: 10s
timeout: 5s
retries: 5
# ── Connection pooler ────────────────────────────────────────
# PgBouncer sits between the app and PostgreSQL; reduces connection overhead.
# In local dev this is optional — apps can connect directly to postgres:5432.
# In staging/prod, POSTGRES_URL must point to pgbouncer:5433, not postgres:5432.
pgbouncer:
image: bitnami/pgbouncer:1.22.1 # pin
container_name: leadmetrics-pgbouncer
restart: unless-stopped
environment:
POSTGRESQL_HOST: postgres
POSTGRESQL_PORT: "5432"
POSTGRESQL_USERNAME: leadmetrics
POSTGRESQL_PASSWORD: leadmetrics
POSTGRESQL_DATABASE: leadmetrics
PGBOUNCER_PORT: "5433"
PGBOUNCER_POOL_MODE: transaction
PGBOUNCER_MAX_CLIENT_CONN: "100"
PGBOUNCER_DEFAULT_POOL_SIZE: "25"
ports:
- "5433:5433"
depends_on:
postgres: { condition: service_healthy }
# ── Object storage (S3-compatible) ───────────────────────────
# Replaces AWS S3 / DO Spaces in local development and on-prem deployments.
# Apps use @aws-sdk/client-s3 with S3_ENDPOINT=http://minio:9000 — same code,
# same key patterns, zero cloud cost locally.
minio:
image: minio/minio:RELEASE.2024-11-07T00-52-20Z # pin — never use latest
container_name: leadmetrics-minio
restart: unless-stopped
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: leadmetrics
MINIO_ROOT_PASSWORD: leadmetrics
volumes:
- minio_data:/data
ports:
- "9000:9000" # S3 API
- "9001:9001" # MinIO Console (web UI)
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:9000/minio/health/live || exit 1"]
interval: 5s
timeout: 5s
retries: 5
# ── MinIO bucket initialiser ──────────────────────────────────
# Creates the required bucket and sets the lifecycle policy on first run.
minio-init:
image: minio/mc:RELEASE.2024-11-17T19-35-25Z
container_name: leadmetrics-minio-init
restart: "no"
depends_on:
minio: { condition: service_healthy }
entrypoint: ["/bin/sh", "-c"]
command: >
"mc alias set local http://minio:9000 leadmetrics leadmetrics &&
mc mb --ignore-existing local/leadmetrics &&
mc anonymous set none local/leadmetrics &&
echo 'MinIO bucket ready.'"
# ── Email testing ─────────────────────────────────────────────
mailhog:
image: mailhog/mailhog
container_name: leadmetrics-mailhog
restart: unless-stopped
ports:
- "1025:1025" # SMTP
- "8025:8025" # Web UI
volumes:
postgres_data:
mongo_data:
redis_data:
qdrant_data:
ollama_data:
minio_data:

docker-compose.apps.yml — Full-stack override
Compose this on top of docker-compose.yml to run all three Next.js apps and the Fastify API in containers — no local Node.js required. Useful for QA, demos, and CI full-stack runs.
# Usage:
# docker compose -f docker-compose.yml -f docker-compose.apps.yml up -d
version: "3.9"
services:
# ── Fastify API ───────────────────────────────────────────────
api:
build:
context: .
dockerfile: apps/api/Dockerfile
container_name: leadmetrics-api
restart: unless-stopped
ports:
- "3001:3001"
env_file: .env.local
environment:
NODE_ENV: development
POSTGRES_URL: postgresql://leadmetrics:leadmetrics@postgres:5432/leadmetrics
MONGO_URL: mongodb://leadmetrics:leadmetrics@mongo:27017/leadmetrics
REDIS_URL: redis://redis:6379
QDRANT_URL: http://qdrant:6333
OLLAMA_BASE_URL: http://ollama:11434
depends_on:
postgres: { condition: service_healthy }
mongo: { condition: service_healthy }
redis: { condition: service_healthy }
qdrant: { condition: service_healthy }
healthcheck:
test: ["CMD-SHELL", "curl -sf http://localhost:3001/health || exit 1"]
interval: 10s
timeout: 5s
retries: 5
# ── Dashboard (tenant-facing) ─────────────────────────────────
dashboard:
build:
context: .
dockerfile: apps/dashboard/Dockerfile
container_name: leadmetrics-dashboard
restart: unless-stopped
ports:
- "3000:3000"
env_file: .env.local
environment:
NODE_ENV: development
API_URL: http://api:3001
NEXTAUTH_URL: http://localhost:3000
depends_on:
api: { condition: service_healthy }
# ── DM Portal (internal team) ─────────────────────────────────
dm-portal:
build:
context: .
dockerfile: apps/dm-portal/Dockerfile
container_name: leadmetrics-dm-portal
restart: unless-stopped
ports:
- "3002:3002"
env_file: .env.local
environment:
NODE_ENV: development
API_URL: http://api:3001
NEXTAUTH_URL: http://localhost:3002
PORT: "3002"
depends_on:
api: { condition: service_healthy }
# ── Manage (super admin) ──────────────────────────────────────
manage:
build:
context: .
dockerfile: apps/manage/Dockerfile
container_name: leadmetrics-manage
restart: unless-stopped
ports:
- "3003:3003"
env_file: .env.local
environment:
NODE_ENV: development
API_URL: http://api:3001
NEXTAUTH_URL: http://localhost:3003
PORT: "3003"
depends_on:
api: { condition: service_healthy }

Quick start — infra only (recommended for development)
# 1. Start all infrastructure services
docker compose up -d
# 2. Verify all services are healthy
docker compose ps
# 3. Reset and seed the database (first run or after a drop)
cd packages/db
DATABASE_URL="postgresql://leadmetrics:leadmetrics@localhost:5432/leadmetrics" pnpm db:push
pnpm db:seed # uses apps/api/.env for DATABASE_URL
# 4. Run each app/server (separate terminals or background processes):
# Next.js apps
cd apps/dashboard && pnpm dev # :3000
cd apps/manage && pnpm dev # :3003
cd apps/dm && pnpm dev # :3002
cd apps/api && pnpm dev # :3001 (Fastify REST API)
# Background servers (BullMQ workers — no HTTP port)
cd apps/servers/agents && pnpm dev
cd apps/servers/billing && pnpm dev
cd apps/servers/notifications && pnpm dev
cd apps/servers/ragengine && pnpm dev

Before any testing session: always confirm all Docker containers are healthy (docker compose ps) and all apps/servers are running before beginning tests.
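A small preflight script can automate that check. This is a sketch, not a shipped file; the script name and URL list are illustrative (ports per the service table below):

// scripts/preflight.ts (hypothetical helper; run with tsx before a test session)
// Pings each local service and exits non-zero if any is unreachable.
const targets: Record<string, string> = {
  dashboard: 'http://localhost:3000',
  api: 'http://localhost:3001/health',
  qdrant: 'http://localhost:6333/readyz',
  ollama: 'http://localhost:11434',
};
const failures: string[] = [];
for (const [name, url] of Object.entries(targets)) {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(3000) });
    if (!res.ok) failures.push(`${name}: HTTP ${res.status}`);
  } catch (err) {
    failures.push(`${name}: ${(err as Error).message}`);
  }
}
if (failures.length > 0) {
  console.error(['Preflight failed:', ...failures].join('\n  '));
  process.exit(1);
}
console.log('All services reachable.');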
Quick start — full stack in Docker (no Node.js required)
# Build and start everything
docker compose -f docker-compose.yml -f docker-compose.apps.yml up -d --build
# Run DB migrations (first run only)
docker compose -f docker-compose.yml -f docker-compose.apps.yml \
exec api sh -c "pnpm db:migrate && pnpm db:seed"
# Stream all logs
docker compose -f docker-compose.yml -f docker-compose.apps.yml logs -f

Service URLs (local)
Next.js apps & API
| Service | URL | Notes |
|---|---|---|
| Dashboard (tenant-facing) | http://localhost:3000 | apps/dashboard — main client app |
| Manage (super admin) | http://localhost:3003 | apps/manage |
| API (Fastify REST) | http://localhost:3001 | apps/api — Auth: /auth/v1, Admin: /admin/v1 |
| DM Portal (internal) | http://localhost:3002 | apps/dm |
BullMQ background servers (no HTTP port)
| Service | Package | Purpose |
|---|---|---|
| Agents server | apps/servers/agents | All AI agent BullMQ workers (setup, strategy, content, social, etc.) |
| Billing server | apps/servers/billing | Invoice generation, overdue checks, cron scheduler |
| Notifications server | apps/servers/notifications | Email/notification queue worker |
| Ragengine server | apps/servers/ragengine | RAG ingestion pipeline worker |
Docker infrastructure
| Service | Host port | Notes |
|---|---|---|
| PostgreSQL | localhost:5432 | leadmetrics / leadmetrics / db: leadmetrics |
| MongoDB | localhost:27017 | leadmetrics / leadmetrics |
| Redis | localhost:6379 | No auth in local dev |
| Qdrant HTTP | http://localhost:6333 | Vector DB REST API + Dashboard UI |
| Qdrant gRPC | localhost:6334 | gRPC (used internally by SDK) |
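A quick way to verify the Qdrant container from app code: a sketch assuming the @qdrant/js-client-rest package is available in the workspace; the collection name is throwaway and 768 matches nomic-embed-text's dimensions.

// Hypothetical Qdrant smoke test (not a shipped script).
import { QdrantClient } from '@qdrant/js-client-rest';

const qdrant = new QdrantClient({ url: 'http://localhost:6333' });

// Create, list, then delete a throwaway collection.
await qdrant.createCollection('smoke-test', {
  vectors: { size: 768, distance: 'Cosine' }, // 768 = nomic-embed-text dimensions
});
console.log((await qdrant.getCollections()).collections.map((c) => c.name));
await qdrant.deleteCollection('smoke-test');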
Ollama models
Three models are pulled automatically by ollama-init on first run:
| Model | Size | Purpose |
|---|---|---|
| gemma3:4b | ~3 GB | Primary local LLM — classification, extraction, summarisation |
| nomic-embed-text | ~270 MB | Text embeddings for RAG datasets (including competitor_content) |
| BAAI/bge-reranker-v2-m3 | ~570 MB | Cross-encoder reranker for hybrid RAG search |
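To confirm the embedding model responds after ollama-init completes, a minimal sketch against Ollama's REST API (the prompt text is arbitrary):

// Hypothetical embedding smoke test against the local Ollama container.
const res = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'nomic-embed-text', prompt: 'smoke test' }),
});
const { embedding } = (await res.json()) as { embedding: number[] };
console.log(`dimensions: ${embedding.length}`); // 768 for nomic-embed-text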
To pull additional models manually:
docker compose exec ollama ollama pull llama3.2
docker compose exec ollama ollama pull mistral

Environment variables (.env.local)
Copy .env.example to .env.local and fill in API keys. The Docker services use hardcoded local credentials — only external API keys need setting for local dev.
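Apps can fail fast when a required variable is missing. A minimal sketch (the helper and its variable list are illustrative; each app would declare its own required subset):

// Hypothetical startup guard, imported at the top of an app's entrypoint.
const required = ['POSTGRES_URL', 'MONGO_URL', 'REDIS_URL', 'NEXTAUTH_SECRET'];
const missing = required.filter((key) => !process.env[key]);
if (missing.length > 0) {
  throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
}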
# ── LLM Providers ────────────────────────────────────────────
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
OLLAMA_BASE_URL=http://localhost:11434 # http://ollama:11434 inside Docker
# ── Databases ─────────────────────────────────────────────────
POSTGRES_URL=postgresql://leadmetrics:leadmetrics@localhost:5432/leadmetrics
MONGO_URL=mongodb://leadmetrics:leadmetrics@localhost:27017/leadmetrics
# ── Queue ─────────────────────────────────────────────────────
REDIS_URL=redis://localhost:6379
# ── RAG / Vector store ────────────────────────────────────────
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY= # empty for local Docker
RAG_WORKER_CONCURRENCY=5
RAG_UPLOAD_DIR=./uploads/rag
RAG_DEFAULT_EMBEDDING_PROVIDER=openai
RAG_DEFAULT_EMBEDDING_MODEL=text-embedding-3-small
RAG_LOCAL_EMBEDDING_MODEL=nomic-embed-text
RAG_RERANKER_MODEL=BAAI/bge-reranker-v2-m3
RAG_RERANKER_URL=http://localhost:11434
DOCLING_URL= # empty = disabled; http://localhost:5001 if --profile docling
# ── App ───────────────────────────────────────────────────────
NODE_ENV=development
NEXT_PUBLIC_APP_URL=http://localhost:3000
API_URL=http://localhost:3001
API_SECRET=dev-secret-change-in-prod
# ── Auth ──────────────────────────────────────────────────────
NEXTAUTH_SECRET=dev-nextauth-secret
NEXTAUTH_URL=http://localhost:3000
# ── Email ─────────────────────────────────────────────────────
SMTP_HOST=localhost
SMTP_PORT=1025
SMTP_FROM=noreply@leadmetrics.local # MailHog catches all mail in dev
# ── Marketing Integrations ────────────────────────────────────
GOOGLE_ADS_CLIENT_ID=
GOOGLE_ADS_CLIENT_SECRET=
META_ADS_APP_ID=
META_ADS_APP_SECRET=
SEMRUSH_API_KEY=
AHREFS_API_KEY=
DATAFORSEO_LOGIN=
DATAFORSEO_PASSWORD=
SLACK_BOT_TOKEN=
GOOGLE_SERVICE_ACCOUNT_JSON=
# ── Payments ─────────────────────────────────────────────────
RAZORPAY_KEY_ID=
RAZORPAY_KEY_SECRET=
# ── Secrets ───────────────────────────────────────────────────
ENCRYPTION_KEY= # AES-256-GCM key for OAuth token encryption
JWT_SECRET= # For task-scoped phone-home JWTs
# ── Observability ─────────────────────────────────────────────
OTEL_EXPORTER_OTLP_ENDPOINT= # empty = no tracing in local dev
OPS_SLACK_CHANNEL= # Slack channel for ops alerts

Dockerfile — Next.js (apps/web)
FROM node:22-alpine AS base
RUN corepack enable
FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/web/package.json ./apps/web/
# NOTE: COPY does not expand wildcards in the destination path, so this line
# needs one explicit COPY per package (or a pruned context via `turbo prune`).
COPY packages/*/package.json ./packages/*/
RUN pnpm install --frozen-lockfile
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN pnpm --filter web build
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
# Security: run as non-root user
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public
USER nextjs
EXPOSE 3000
CMD ["node", "apps/web/server.js"]Dockerfile — Fastify API (apps/api)
FROM node:22-alpine AS base
RUN corepack enable
FROM base AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
# NOTE: as in the web Dockerfile, the wildcard destination must be expanded
# into one COPY per package.
COPY packages/*/package.json ./packages/*/
RUN pnpm install --frozen-lockfile --prod
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
# Security: run as non-root user
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 apiuser
COPY --from=deps --chown=apiuser:nodejs /app/node_modules ./node_modules
COPY --chown=apiuser:nodejs apps/api/dist ./apps/api/dist
COPY --chown=apiuser:nodejs packages/*/dist ./packages/*/dist
USER apiuser
EXPOSE 3001
CMD ["node", "apps/api/dist/index.js"]Container Security Rules
All production Docker images must follow these rules (enforced by CI docker scan step):
| Rule | Enforcement |
|---|---|
| Non-root user (USER directive) | Required in every runner stage |
| Pinned image tags (no latest) | Checked by hadolint in the Dockerfile lint step |
| No privileged: true in compose | Compose validator in CI |
| DB ports not exposed externally | Compose validator checks 5432, 27017, 6379, 6333 have no external bind in staging/prod compose |
| No secrets in Dockerfile | truffleHog scans all Dockerfiles in CI |
| Base image CVE scan | docker scout runs on every build; critical CVEs block deploy |
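For illustration, the compose checks could be as small as this sketch (the actual CI script is not shown here; assumes the yaml npm package and simple HOST:CONTAINER port mappings):

// Hypothetical compose validator: flags privileged mode and exposed DB ports.
import { readFileSync } from 'node:fs';
import { parse } from 'yaml';

const DB_PORTS = new Set([5432, 27017, 6379, 6333]);
const compose = parse(readFileSync('docker-compose.yml', 'utf8'));

const violations: string[] = [];
for (const [name, svc] of Object.entries<any>(compose.services ?? {})) {
  if (svc.privileged === true) violations.push(`${name}: privileged mode is forbidden`);
  for (const mapping of svc.ports ?? []) {
    const hostPart = String(mapping).split(':')[0]; // HOST side of "HOST:CONTAINER"
    if (DB_PORTS.has(Number(hostPart))) violations.push(`${name}: DB port ${hostPart} bound externally`);
  }
}
if (violations.length > 0) {
  console.error(violations.join('\n'));
  process.exit(1);
}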
Coolify — Deployment Platform
Coolify is a self-hosted PaaS that provides Heroku/Railway-style deployments on any VPS. It manages Docker containers, reverse proxy (Traefik), SSL certificates, secrets, and zero-downtime deploys.
Why Coolify
- Self-hosted: data never leaves our VPS; required for enterprise on-prem
- Git-based deploys: push to branch → Coolify auto-deploys
- Docker Compose support: deploys the same docker-compose.yml used locally
- Environment management: separate dev/staging/prod with isolated secrets
- Free and open source: no vendor lock-in
Server Setup (one-time)
# On a fresh Ubuntu 22.04 VPS (min 4 vCPU, 8GB RAM, 80GB SSD)
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
# Access Coolify at: http://<server-ip>:8000
# Complete setup wizard, add SSH key, configure domain

Environment Structure in Coolify
Coolify Server
└── Project: dmagency
├── Environment: production
│ ├── Service: web (Next.js) main branch
│ ├── Service: api (Fastify) main branch
│ ├── Service: postgres
│ ├── Service: mongo
│ ├── Service: redis
│ └── Domain: app.dmagency.io → web:3000
│
├── Environment: staging
│ ├── Service: web staging branch
│ ├── Service: api staging branch
│ ├── Databases (separate instances)
│ └── Domain: staging.dmagency.io
│
└── Environment: dev
├── Service: web develop branch
├── Service: api develop branch
├── Databases (separate instances)
└── Domain: dev.dmagency.io

Deploy Flow
Developer pushes to 'main'
│
▼
GitHub Actions runs:
1. pnpm lint
2. pnpm test:unit
3. pnpm test:integration (Docker services spun up)
4. pnpm test:e2e (full stack in Docker)
│
all pass?
│
▼
GitHub Actions calls Coolify webhook:
POST https://coolify.dmagency.io/api/v1/deploy?uuid={service-uuid}
Authorization: Bearer {COOLIFY_WEBHOOK_TOKEN}
│
▼
Coolify:
1. Pulls latest code
2. Builds new Docker image
3. Runs health check on new container
4. Zero-downtime swap (Traefik routes traffic to new container)
5. Removes old container

Zero-Downtime Deploy
Coolify uses Traefik as the reverse proxy. During a deploy:
- New container starts, exposes health check endpoint
- Traefik waits for GET /health to return 200
- Once healthy, Traefik shifts traffic to the new container
- Old container receives a SIGTERM, finishes in-flight requests, then exits
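The drain step relies on the API shutting down cleanly on SIGTERM. A sketch of the handler (hypothetical; the real one may differ, and it assumes the fastify instance is in scope):

// Graceful shutdown: stop accepting new connections, finish active requests, exit.
process.on('SIGTERM', async () => {
  fastify.log.info('SIGTERM received; draining in-flight requests');
  await fastify.close(); // resolves once active requests have completed
  process.exit(0);
});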
// apps/api/src/health.ts
fastify.get('/health', async () => ({
status: 'ok',
postgres: await checkPostgresConnection(),
mongo: await checkMongoConnection(),
redis: await checkRedisConnection(),
}));

Secrets Management
Secrets are managed per-environment in Coolify’s UI and injected as environment variables at container startup. Never committed to the repository.
In addition, Doppler syncs secrets to Coolify environments:
# Sync Doppler secrets to Coolify environment
doppler secrets download --no-file --format env > .env
# Coolify picks up .env during build

Enterprise On-Prem Deployment
The customer runs Coolify on their own infrastructure. We ship:
- Docker images (published to a private registry)
- docker-compose.enterprise.yml (enterprise-specific defaults)
- Coolify project export (environment configuration template)
On-prem docker-compose.enterprise.yml
# Key differences from standard compose:
# - No Ollama exposed externally (internal network only)
# - PostgreSQL and MongoDB with stronger passwords + backup volumes
# - No MailHog (uses customer's SMTP)
# - Environment set to on_prem mode
services:
web:
image: registry.dmagency.io/web:latest
environment:
DEPLOYMENT_MODE: on_prem
SINGLE_TENANT_ID: ${TENANT_SLUG}
api:
image: registry.dmagency.io/api:latest
environment:
DEPLOYMENT_MODE: on_prem
ollama:
image: ollama/ollama:0.4.1 # pin — match the version used in the standard compose
# Not port-exposed externally — internal network only
networks:
- internal
postgres:
image: postgres:16
volumes:
- /data/dmagency/postgres:/var/lib/postgresql/data # persistent, customer-managed
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD} # customer sets this
mongo:
image: mongo:7
volumes:
- /data/dmagency/mongo:/data/db
networks:
internal:
driver: bridge

Update process (on-prem)
# Customer runs (or automates via Coolify):
docker compose pull # pull latest images
docker compose up -d # rolling restart

Updates are shipped as new Docker image tags. Customers control when they update.
Backup Strategy
SaaS (Coolify-managed)
| Data | Backup method | Frequency | Retention |
|---|---|---|---|
| PostgreSQL | Coolify automated backup to S3 | Every 6 hours | 30 days |
| MongoDB | mongodump to S3 | Every 6 hours | 30 days |
| Redis | RDB snapshot to S3 | Daily | 7 days |
| Skills library (files) | Synced to S3 | On change | Indefinite |
Enterprise on-prem
Customer is responsible for backups. We provide:
- scripts/backup.sh — runs pg_dump + mongodump and compresses the output to a tarball (sketched after this list)
- Documentation for restoring from backup
- Recommendation: run backup script via cron + ship to their S3/NAS
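The shipped script is plain shell; as an illustration of the flow it performs (paths and the date format are assumptions):

// TypeScript rendering of what scripts/backup.sh does; illustrative only.
import { execSync } from 'node:child_process';

const stamp = new Date().toISOString().slice(0, 10); // e.g. 2025-01-31
const dir = `/tmp/backup-${stamp}`;

execSync(`mkdir -p ${dir}`);
execSync(`pg_dump "$POSTGRES_URL" > ${dir}/postgres.sql`);   // relies on env vars
execSync(`mongodump --uri "$MONGO_URL" --out ${dir}/mongo`); // being set
execSync(`tar -czf ${dir}.tar.gz -C /tmp backup-${stamp}`);
// The tarball is then shipped to the customer's S3/NAS (e.g. via cron + aws s3 cp).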
Monitoring & Alerting
Health checks
- GET /health on both Next.js and Fastify services
- Coolify monitors health and restarts containers that fail checks
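The Fastify handler appears in the Zero-Downtime Deploy section above; for the Next.js apps a minimal route handler is enough (a sketch assuming the App Router; the real handler may check more):

// apps/dashboard/src/app/health/route.ts (hypothetical path, serves GET /health)
export async function GET() {
  return Response.json({ status: 'ok' });
}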
Logs
- All container logs shipped to a central log aggregator (Grafana Loki or Papertrail)
- Structured JSON logs from all services (pino logger)
- Application-level events stored in MongoDB for the UI Logs screen
Alerts (Slack)
- Container restart → Slack alert
- BullMQ queue stall (no workers processing) → Slack alert (see the sketch after this list)
- Tenant budget cap hit → Slack alert (per tenant)
- Database connection failure → Slack alert
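The queue-stall check might look like this sketch (assumes BullMQ and ioredis; the queue name and thresholds are illustrative). It reuses the alertOps wrapper defined just below:

import IORedis from 'ioredis';
import { Queue } from 'bullmq';

const connection = new IORedis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });

// A queue is considered stalled when jobs are waiting but no worker is attached.
async function checkQueueStall(name: string) {
  const queue = new Queue(name, { connection });
  const workers = await queue.getWorkers();      // clients registered as workers
  const waiting = await queue.getWaitingCount(); // jobs not yet picked up
  if (workers.length === 0 && waiting > 0) {
    await alertOps(`Queue '${name}' stalled: ${waiting} jobs waiting, no workers`); // alertOps: see below
  }
  await queue.close();
}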
// Alert wrapper used throughout the codebase
async function alertOps(message: string, context?: Record<string, unknown>) {
await slackClient.postMessage({
channel: process.env.OPS_SLACK_CHANNEL!,
text: `🚨 ${message}`,
blocks: context ? buildContextBlocks(context) : undefined,
});
}

Security Hardening
HTTP Security Headers
Security headers are set at two levels:
1. Traefik (reverse proxy) — global, applied to all services:
# Coolify Traefik middleware config (applied to all ingress routes)
traefik.http.middlewares.security-headers.headers.stsSeconds: "31536000"
traefik.http.middlewares.security-headers.headers.stsIncludeSubdomains: "true"
traefik.http.middlewares.security-headers.headers.stsPreload: "true"
traefik.http.middlewares.security-headers.headers.forceSTSHeader: "true"
traefik.http.middlewares.security-headers.headers.contentTypeNosniff: "true"
traefik.http.middlewares.security-headers.headers.frameDeny: "true"
traefik.http.middlewares.security-headers.headers.referrerPolicy: "strict-origin-when-cross-origin"
traefik.http.middlewares.security-headers.headers.browserXssFilter: "true"

2. Next.js next.config.ts — per-app headers including CSP:
const securityHeaders = [
{ key: 'Content-Security-Policy', value: [
"default-src 'self'",
"script-src 'self'",
"style-src 'self' 'unsafe-inline'", // required for Tailwind CSS
"img-src 'self' data: https:",
`connect-src 'self' ${process.env.NEXT_PUBLIC_API_URL}`,
"font-src 'self'",
"frame-ancestors 'none'",
].join('; ') },
{ key: 'Cross-Origin-Opener-Policy', value: 'same-origin' },
{ key: 'Cross-Origin-Resource-Policy', value: 'same-origin' },
{ key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
];
// Applied in next.config.ts headers() callback for all routes

Production Port Exposure
In production Docker Compose (not shown here — managed by Coolify), database ports are not bound to external interfaces. Only the reverse proxy ports are exposed:
| Service | Dev (local) | Staging/Prod |
|---|---|---|
| PostgreSQL :5432 | 127.0.0.1:5432 | Internal Docker network only |
| MongoDB :27017 | 127.0.0.1:27017 | Internal Docker network only |
| Redis :6379 | 127.0.0.1:6379 | Internal Docker network only |
| Qdrant :6333 | 127.0.0.1:6333 | Internal Docker network only |
| Fastify API :3001 | 0.0.0.0:3001 | Internal Docker network only (via Traefik) |
| Next.js apps | 0.0.0.0:3000-3003 | Internal Docker network only (via Traefik) |
| Traefik :443 | — | 0.0.0.0:443 (only externally exposed port) |
CI Secret Scanning
Every push and PR runs two secret scanning tools:
# .github/workflows/security.yml
- name: truffleHog — scan diff for committed secrets
uses: trufflesecurity/trufflehog@main
with:
path: ./
base: ${{ github.event.repository.default_branch }}
head: HEAD
- name: GitHub Advanced Security — secret scanning
# Enabled at repo level in GitHub settings
# Alerts on any matched secret pattern in the full repo history

Any detected secret immediately blocks the PR. Developers who accidentally commit a secret must rotate it immediately — the scan alert includes the secret type and the commit SHA.