# AI Visibility — Architecture
## Database Schema

### Models
**AIVisibilityPlatform** — global registry of supported LLM platforms (seeded, not tenant-editable):

| Field | Type | Notes |
|---|---|---|
| id | String (cuid) | |
| key | String | `chatgpt` \| `claude` \| `gemini` \| `perplexity` |
| displayName | String | Shown in UI |
| model | String | e.g. `gpt-4.1`, `claude-sonnet-4-5` |
| apiKeyEnvVar | String | Name of the env var checked for key presence |
| isEnabled | Boolean | Global on/off (admin-controlled) |
**TenantAIVisibilityPlatform** — per-tenant platform toggle:

| Field | Type | Notes |
|---|---|---|
| tenantId | String | FK → Tenant |
| platformId | String | FK → AIVisibilityPlatform |
| isEnabled | Boolean | Defaults to true if no record exists |
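The "defaults to true if no record exists" rule combines with the global toggle when deciding which platforms to query. A minimal sketch of that resolution logic (type and function names are illustrative, not the worker's actual identifiers):

```typescript
// Illustrative field subsets of the two models above.
interface AIVisibilityPlatform {
  id: string;
  key: string;
  isEnabled: boolean; // global admin toggle
}

interface TenantAIVisibilityPlatform {
  platformId: string;
  isEnabled: boolean;
}

// A platform is active for a tenant when it is globally enabled AND the
// tenant has not explicitly disabled it. A missing tenant record counts
// as enabled, per the documented default.
function isPlatformActiveForTenant(
  platform: AIVisibilityPlatform,
  tenantToggles: TenantAIVisibilityPlatform[],
): boolean {
  if (!platform.isEnabled) return false;
  const toggle = tenantToggles.find((t) => t.platformId === platform.id);
  return toggle ? toggle.isEnabled : true; // no row → enabled
}
```

Note the asymmetry: an admin disabling a platform globally wins over any tenant setting, while the tenant row only ever narrows the global set.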
**AIVisibilityPrompt** — prompts monitored per tenant:

| Field | Type | Notes |
|---|---|---|
| tenantId | String | |
| prompt | String | Full prompt text sent to LLMs |
| promptType | String | `local` \| `category` \| `brand` \| `competitor` |
| isActive | Boolean | Only active prompts are sent on each run |
**AIVisibilitySnapshot** — one row per prompt × platform per run:

| Field | Type | Notes |
|---|---|---|
| tenantId | String | |
| promptId | String | FK → AIVisibilityPrompt |
| platform | String | Platform key |
| isMentioned | Boolean | Whether the brand name appears in the LLM response |
| rawExcerpt | String? | ±150 chars around the first brand mention |
| citedSources | String[] | URLs extracted from the response |
| competitors | String[] | Competitor names detected in the response |
| runAt | DateTime | When the check was executed |
**AIVisibilityNarrative** — brand narrative generated after each run:

| Field | Type | Notes |
|---|---|---|
| tenantId | String | |
| period | String | `last_7_days` \| `last_30_days` |
| status | String | `pending` \| `generating` \| `done` \| `failed` |
| narrative | String? | Markdown text from brand-narrative-analyst |
## BullMQ Workers

### agent__ai-visibility-monitor
File: `packages/agents/src/workers/ai-visibility-monitor.worker.ts`
Job data: `AIVisibilityMonitorJobData` — `{ tenantId, tenantName, promptIds? }`
Flow:

- Load globally enabled platforms, filtered by tenant preferences and `apiKeyEnvVar` presence
- Load active prompts for the tenant (all, or the specific `promptIds` if provided)
- Reserve `ai_visibility` credits
- For each prompt × platform: call the LLM, parse brand mentions, cited URLs, and competitor names, then write an `AIVisibilitySnapshot`
- Consume credits if any snapshots were created; release them if none were
- Enqueue a `brand-narrative-analyst` job after a successful run
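The per-response parsing step can be sketched as two pure functions. The field semantics come from the `AIVisibilitySnapshot` table (case handling, ±150-char excerpt, URL extraction); the worker's actual heuristics and regexes are assumptions here:

```typescript
// Sketch of the parsing step: brand mention + excerpt + cited URLs.
// The production worker may normalize differently; this follows the
// snapshot field definitions (isMentioned, rawExcerpt = ±150 chars
// around the first mention, citedSources).

function detectMention(response: string, brand: string) {
  const idx = response.toLowerCase().indexOf(brand.toLowerCase());
  if (idx === -1) return { isMentioned: false, rawExcerpt: null as string | null };
  const start = Math.max(0, idx - 150);
  const end = Math.min(response.length, idx + brand.length + 150);
  return { isMentioned: true, rawExcerpt: response.slice(start, end) };
}

function extractCitedSources(response: string): string[] {
  // Naive URL matcher; a real implementation might also strip markdown
  // link syntax and trailing punctuation.
  const urls = response.match(/https?:\/\/[^\s)\]"']+/g) ?? [];
  return [...new Set(urls)]; // de-duplicate repeated citations
}
```

One snapshot row per prompt × platform is then written from these results plus the competitor-name detection (not shown).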
LLM callers:
| Platform | Client | Env vars |
|---|---|---|
| ChatGPT | AzureOpenAI (openai SDK) | AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT, AZURE_OPENAI_API_VERSION |
| Claude | Anthropic SDK | ANTHROPIC_API_KEY |
| Gemini | Direct fetch to generativelanguage.googleapis.com | GOOGLE_GENERATIVE_AI_KEY |
| Perplexity | Direct fetch to api.perplexity.ai | PERPLEXITY_API_KEY |
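For the two direct-fetch platforms, the worker posts a JSON body to the provider's REST endpoint. A hedged sketch of the Perplexity request builder: the endpoint and chat-completions body shape follow Perplexity's OpenAI-compatible API, but the exact options the worker sets (temperature, system prompt, etc.) are assumptions:

```typescript
// Builds the request that a fetch() to api.perplexity.ai would send.
// Separated from the network call so the shape is easy to inspect.
function buildPerplexityRequest(model: string, prompt: string) {
  return {
    url: "https://api.perplexity.ai/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY ?? ""}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model, // e.g. the seeded llama-3.1-sonar-small-128k-online
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}
```

The worker would then call `fetch(url, init)` and hand the response text to the parsing step.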
Nightly schedule: registered on worker startup; runs at `0 2 * * *` UTC.

### agent__brand-narrative-analyst

File: `packages/agents/src/workers/brand-narrative-analyst.worker.ts`
Auto-triggered after each monitor run. Reads the latest snapshots and produces a human-readable narrative (an `AIVisibilityNarrative` row) describing how the brand is perceived across AI platforms.
## API Routes

All routes live under `/tenant/v1/ai-visibility` in `apps/api/src/routers/ai-visibility.ts` and require tenant JWT auth.
| Method | Path | Description |
|---|---|---|
| GET | /platforms | List platforms with apiKeyPresent + tenant enabled state |
| PATCH | /platforms/:platformId | Toggle a platform on/off for this tenant |
| GET | /prompts | List tenant’s monitored prompts |
| POST | /prompts | Create a new prompt |
| PATCH | /prompts/:promptId | Edit/toggle a prompt |
| DELETE | /prompts/:promptId | Delete a prompt |
| GET | /snapshots | Latest snapshot per prompt × platform |
| GET | /stats | GEO score, per-platform mention rate, trend data |
| POST | /run | Enqueue an on-demand monitor run |
| GET | /narrative | Latest brand narrative for the tenant |
| GET | /history | Snapshot history grouped by run date |
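The `/stats` route aggregates snapshot rows. A sketch of the per-platform mention-rate calculation; the actual GEO score formula is not specified in this document, so treat the percentage here as a stand-in, not the real metric:

```typescript
// Minimal snapshot shape needed for the aggregation.
interface SnapshotRow {
  platform: string;
  isMentioned: boolean;
}

// Per-platform mention rate: mentioned / total, as a rounded 0–100 value.
function mentionRateByPlatform(rows: SnapshotRow[]): Record<string, number> {
  const totals: Record<string, { hits: number; n: number }> = {};
  for (const row of rows) {
    const t = (totals[row.platform] ??= { hits: 0, n: 0 });
    t.n += 1;
    if (row.isMentioned) t.hits += 1;
  }
  return Object.fromEntries(
    Object.entries(totals).map(([k, t]) => [k, Math.round((t.hits / t.n) * 100)]),
  );
}
```

Trend data would apply the same aggregation per `runAt` bucket; the grouping key is the only thing that changes.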
## apiKeyPresent Check

Both the API and the Next.js dashboard server component check `process.env[p.apiKeyEnvVar]` to decide whether to show "API key missing". This means the keys must be present in both `apps/api/.env` and `apps/dashboard/.env.local`, even though the actual LLM calls happen only in the agents process (configured via `apps/servers/agents/.env`).
Three files must have the keys:

| File | Purpose |
|---|---|
| `apps/servers/agents/.env` | Actual LLM calls in the worker |
| `apps/api/.env` | apiKeyPresent check in the /platforms endpoint |
| `apps/dashboard/.env.local` | apiKeyPresent check in the page.tsx server component |
## Seed Data

`packages/db/src/seed.ts` → `seedAIVisibilityPlatforms()` upserts the four platform records on every seed run. Re-run `pnpm db:seed` (from `packages/db`) after changing `apiKeyEnvVar` or `model` values.
Current seeded values:

| key | displayName | model | apiKeyEnvVar |
|---|---|---|---|
| chatgpt | ChatGPT (GPT-4.1) | gpt-4.1 | AZURE_OPENAI_API_KEY |
| gemini | Google Gemini 2.0 | gemini-2.0-flash | GOOGLE_GENERATIVE_AI_KEY |
| perplexity | Perplexity | llama-3.1-sonar-small-128k-online | PERPLEXITY_API_KEY |
| claude | Claude (Sonnet) | claude-sonnet-4-5 | ANTHROPIC_API_KEY |
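The seeded rows above can be expressed as a typed constant, which is roughly what `seedAIVisibilityPlatforms()` would iterate over. How the upsert itself is written (e.g. a Prisma `upsert` keyed on `key`) is an assumption and omitted here:

```typescript
interface PlatformSeed {
  key: string;
  displayName: string;
  model: string;
  apiKeyEnvVar: string;
}

// Values copied from the seed table; upserting by `key` is what makes
// re-running the seed safe after changing model or apiKeyEnvVar values.
const PLATFORM_SEEDS: PlatformSeed[] = [
  { key: "chatgpt", displayName: "ChatGPT (GPT-4.1)", model: "gpt-4.1", apiKeyEnvVar: "AZURE_OPENAI_API_KEY" },
  { key: "gemini", displayName: "Google Gemini 2.0", model: "gemini-2.0-flash", apiKeyEnvVar: "GOOGLE_GENERATIVE_AI_KEY" },
  { key: "perplexity", displayName: "Perplexity", model: "llama-3.1-sonar-small-128k-online", apiKeyEnvVar: "PERPLEXITY_API_KEY" },
  { key: "claude", displayName: "Claude (Sonnet)", model: "claude-sonnet-4-5", apiKeyEnvVar: "ANTHROPIC_API_KEY" },
];
```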