AI Search — Feature Gap Analysis & Roadmap
Compared SEMrush’s AI Visibility Toolkit and Position Tracking against Leadmetrics’ existing SEO/analytics stack. Leadmetrics has strong traditional SEO foundations; the entire AI search visibility layer is missing and represents the most strategic opportunity.
What Leadmetrics Already Has
| Feature | Location |
|---|---|
| Google Search Console integration (clicks, impressions, CTR, avg position) | packages/providers/google/src/google-search-console.ts |
| GSC Insights AI analyst (SWOT + recommendations) | packages/agents/src/workers/insights/gsc-insights.worker.ts |
| GA4 integration (sessions, sources, pages, organic, country) | packages/providers/google/src/google-analytics.ts |
| GA Insights AI analyst | packages/agents/src/workers/insights/ga-insights.worker.ts |
| Bing Webmaster Tools integration | packages/providers/microsoft |
| Keyword Researcher agent (clusters: branded, informational, local, competitor gap, long-tail) | packages/agents/src/workers/keyword-researcher.worker.ts |
| Content Brief Writer agent | packages/agents/src/workers/content-brief-writer.worker.ts |
| Site Auditor agent (technical SEO) | packages/agents/src/workers/site-auditor.worker.ts |
| Backlink Researcher + Outreach Writer agents | packages/agents/src/workers/backlink-researcher.worker.ts |
| Topic Researcher agent | packages/agents/src/workers/topic-researcher.worker.ts |
| On-page SEO score for blog posts | apps/dashboard/src/app/(dashboard)/blog/[id]/BlogPostDetail.tsx |
| Strategy planner includes keyword research + content briefs | packages/agents/src/workers/strategy-writer.worker.ts |
Feature Gaps
Tier 1 — Directly leverages existing AI infrastructure (highest ROI)
1. LLM Brand Visibility Tracking — “AI Share of Voice”
What SEMrush does: Periodically send brand-relevant prompts to ChatGPT, Gemini, Perplexity, and Copilot. Record whether the brand is mentioned, how it’s described, and which sources are cited. Compute a running visibility score and share of voice vs competitors.
Why this is the #1 priority for Leadmetrics: Leadmetrics already queries multiple LLM providers (Claude, Gemini, OpenAI-compatible). The infrastructure to send prompts and parse responses exists today. This is the single most differentiated report Leadmetrics can deliver — “ChatGPT recommends your competitor 8/10 times but you 2/10 times for ‘best plumber in Austin’” is a compelling, sticky insight no generic marketing tool delivers inside a full content workflow.
How to build:
- New DB models: `AIVisibilityPrompt` (monitored prompts per tenant, with industry/intent tags) + `AIVisibilitySnapshot` (per-prompt results per LLM per day: mentioned yes/no, sentiment, cited sources, raw response excerpt)
- New BullMQ worker: `packages/agents/src/workers/ai-visibility-monitor.worker.ts`
  - Sends each monitored prompt to ChatGPT (OpenAI API), Gemini (Google Generative AI), and the Perplexity API
  - Parses responses: does the brand name appear? In what surrounding context? Which URLs are cited?
  - Stores one snapshot per prompt per LLM
  - Runs nightly (or via on-demand trigger)
- New dashboard page: `apps/dashboard/src/app/(dashboard)/ai-search/`
  - Overall visibility score card (% of prompts where the brand is mentioned)
  - Per-LLM breakdown (ChatGPT vs Gemini vs Perplexity)
  - Per-prompt result table with trend sparklines
  - Cited sources list (shows what content AI is using instead of yours)
Files to create/modify:
- `packages/agents/src/workers/ai-visibility-monitor.worker.ts` — new
- `apps/api/src/routers/ai-visibility.ts` — new CRUD for prompts + snapshot queries
- `apps/dashboard/src/app/(dashboard)/ai-search/` — new section
- DB: `AIVisibilityPrompt`, `AIVisibilitySnapshot` models
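The "share of voice vs competitors" metric pitched above reduces to a simple aggregation over stored snapshots. A minimal sketch, assuming a row shape that mirrors the proposed `AIVisibilitySnapshot` fields (names are illustrative, not final schema):

```typescript
// Snapshot row shape assumed to mirror the proposed AIVisibilitySnapshot model.
interface SnapshotRow {
  promptId: string;
  platform: string;
  isMentioned: boolean;  // was the client's brand mentioned in the response?
  competitors: string[]; // competitor brand names detected in the same response
}

// Share of voice: for each brand, the fraction of prompt runs in which
// an LLM mentioned that brand.
function shareOfVoice(rows: SnapshotRow[], clientBrand: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of rows) {
    if (row.isMentioned) counts.set(clientBrand, (counts.get(clientBrand) ?? 0) + 1);
    for (const c of row.competitors) counts.set(c, (counts.get(c) ?? 0) + 1);
  }
  const total = rows.length || 1;
  const result = new Map<string, number>();
  for (const [brand, n] of counts) result.set(brand, n / total);
  return result;
}
```

This is exactly the "competitor 8/10, you 2/10" framing: each brand's count over the total number of prompt runs.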
2. AI Competitor Gap Analysis
What SEMrush does: Compare your brand vs up to 3 competitors across the same monitored prompts. Identify which prompts mention them but not you, what sources AI cites for them.
How to build: A query layer on top of the LLM visibility data from #1. When storing snapshots, also record which competitor brands each response mentions, so their results sit alongside yours. The gap analysis is then a JOIN across prompt snapshots filtered by competitor brand-name mentions. Surface gaps as content opportunities that feed directly into the content-brief-writer agent — closing the loop from “AI doesn’t mention you here” to “generate content to fix this”.
Files to create/modify:
- `AIVisibilityCompetitor` DB model (per-tenant competitor list for AI visibility)
- `apps/dashboard/src/app/(dashboard)/ai-search/competitors/` — new page
- New `content-brief-writer` trigger: “AI gap brief” generation from competitor gap data
3. Brand Sentiment & Narrative Analysis in AI Responses
What SEMrush does: Classify AI mentions as positive/negative/neutral. Track narrative themes (“affordable”, “unreliable”, “innovative”). Show brand perception per LLM platform.
How to build: The review-response-writer already does sentiment classification on reviews. Apply the same pattern to AI response parsing. After storing the raw response excerpt for each brand mention (#1), pass it through a brand-narrative-analyst insight worker to extract:
- Sentiment score (positive/negative/neutral)
- Key adjectives/attributes associated with the brand
- Topics the brand is cited for vs topics it’s excluded from
New worker: packages/agents/src/workers/insights/brand-narrative-analyst.worker.ts
Dashboard: Narrative tab under the ai-search/ page — shows attribute cloud, sentiment trend, per-LLM perception differences
4. Prompt Research / AI Topic Demand
What SEMrush does: Show real prompts people ask AI in a given industry, with AI Volume (frequency), Difficulty (competition), and Intent (informational/transactional).
How to build: Extend topic-researcher.worker.ts with a second phase: after discovering candidate topics, send each as a prompt to the configured LLMs and record:
- Whether any brand is cited (if not, opportunity)
- How many competing brands are cited (difficulty proxy)
- Intent classification of the topic
This repurposes the existing topic research output into AI-era prompt research without building a separate agent. The results populate the AI search dashboard with “Topic Opportunities” — topics with high relevance, low competition in AI responses.
Files to modify:
- `packages/agents/src/workers/topic-researcher.worker.ts` — add LLM visibility validation phase
- Dashboard: add Prompt Research tab under `ai-search/`
Tier 2 — Meaningful new surface, builds on existing data
5. Keyword Position History & Rank Change Alerts
What SEMrush does: Daily keyword position snapshots stored with full history. Alerts when positions drop significantly.
Leadmetrics gap: GSC data is fetched live on demand. The gsc-keywords-fetch-v2 queue job is referenced in docs but the worker doesn’t exist. Without history, there’s no trend tracking and no way to know if SEO efforts are paying off.
How to build:
- New BullMQ job: `packages/agents/src/workers/jobs/gsc-keywords-snapshot.job.ts` — nightly GSC keyword position snapshot → `GSCKeywordSnapshot` table
- Update the GSC channel detail page to show a 90-day position history chart per keyword
- Notification trigger: if a top-10 keyword drops more than N positions, fire an alert via the existing notification infrastructure
Files to create:
- `packages/agents/src/workers/jobs/gsc-keywords-snapshot.job.ts` — new
- DB: `GSCKeywordSnapshot` model
- Update `apps/dashboard/src/app/(dashboard)/channels/[id]/GoogleSearchConsoleChannelDetail.tsx` — add history chart
6. Google AI Overview (AIO) Tracking
What SEMrush does: Detect when your page appears in Google’s AI-generated overview at the top of search results for a tracked keyword.
How to build: Requires querying Google’s SERP and parsing the response for AIO presence. Options:
- Use a SERP API (ValueSERP, BrightLocal, or DataForSEO) to fetch SERP HTML for tracked keywords
- Parse for AIO block and whether the client’s domain is cited in it
- Store result per keyword per day alongside the standard GSC position data
Dependency: SERP API subscription (evaluating ValueSERP or DataForSEO)
Tier 3 — Out of scope / third-party data required
7. AI Volume / Prompt Frequency Scores
What SEMrush does: Quantified “AI Volume” — how often a specific prompt is sent to AI tools globally.
Why skip: SEMrush has proprietary panel data and partnerships to compute this metric at scale. It cannot be replicated without similar infrastructure. Leadmetrics can approximate “relevance” of a prompt using LLM responses and engagement signals, but not true volume.
8. Full Looker Studio / PDF Report Automation
What SEMrush does: Auto-generate branded PDF reports combining AI visibility data with SEO metrics.
Leadmetrics path: The report-writer agent already exists. Once AI visibility data is stored in the DB, it can be included in the monthly report template. Low effort once #1–#5 are built.
Implementation Roadmap
Phase 1 — LLM Visibility Foundation
Step 0 — Competitor Model (Foundational, shared across all phases)
Current state: Competitor data today is stored as plain text inside the ClientContext markdown document — produced by the competitor-researcher agent during setup but never persisted in a structured DB record. There is no Competitor model, no way to query competitors, and no shared competitor list for the AI visibility monitor, keyword researcher, or backlink researcher to reference.
Why it needs to be built first: Phase 1’s AI visibility monitor needs the competitor list to detect competitor brand mentions in LLM responses. Phase 2’s gap analysis queries which prompts mention competitors but not the client. Phase 3’s keyword researcher uses competitor domains for gap clusters. One central model serves all of them.
Add to packages/db/prisma/schema.prisma:
```prisma
model Competitor {
  id        String   @id @default(cuid())
  tenantId  String
  name      String   // Brand name — used for LLM mention detection and keyword gap
  domain    String?  // Website domain — used for backlink gap and SEO analysis
  isActive  Boolean  @default(true)
  notes     String?  @db.Text
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  tenant Tenant @relation(fields: [tenantId], references: [id])

  @@index([tenantId])
  @@index([tenantId, isActive])
}
```

Auto-populated on setup: When the competitor-researcher agent completes and its output is written to ClientContext, a post-processing step parses competitor names and domains from the agent output and creates Competitor records. This mirrors how keywords are auto-created from the keyword-researcher output.
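That parsing step could look like the sketch below. The "Name - domain" bullet format is an assumption about the competitor-researcher's markdown output; the real parser must match whatever structure the agent actually emits:

```typescript
// Hypothetical parser for competitor-researcher output. Assumes bullets like
// "- Acme Plumbing - acmeplumbing.com" or "- Solo Rooter" (domain optional);
// adjust the regex to the agent's real output format.
interface ParsedCompetitor {
  name: string;
  domain?: string;
}

function parseCompetitors(markdown: string): ParsedCompetitor[] {
  const results: ParsedCompetitor[] = [];
  for (const line of markdown.split("\n")) {
    // Bullet marker, then name, then an optional " - domain.tld" suffix.
    const m = line.match(/^[-*]\s+(.+?)(?:\s+-\s+((?:[\w-]+\.)+[a-z]{2,}))?\s*$/i);
    if (!m) continue;
    results.push({ name: m[1].trim(), domain: m[2]?.toLowerCase() });
  }
  return results;
}
```

Each parsed row then becomes one `Competitor` record (`prisma.competitor.create`) keyed to the tenant.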
Management UI: /settings/competitors page in the dashboard — simple list with name + domain, add / edit / delete. Also available in the DM portal at the same path so DMs can keep it current between strategy cycles.
API router: New file apps/api/src/routers/competitors.ts, registered at /tenant/v1/competitors.
| Method | Path | What it does |
|---|---|---|
| GET | / | List all competitors (active + inactive) |
| POST | / | Add a competitor manually |
| PATCH | /:id | Edit name / domain / active flag / notes |
| DELETE | /:id | Remove a competitor |
Where the Competitor model is consumed:
| Feature | Uses | How |
|---|---|---|
| AI Visibility Monitor | name | Detect competitor brand names in LLM responses → store in AIVisibilitySnapshot.competitors[] |
| AI Competitor Gap (Phase 2) | name | Query which prompts mentioned a competitor but not the client |
| Keyword Researcher agent | name + domain | Competitor gap keyword clusters (already a category in the agent output) |
| Backlink Researcher agent | domain | Find competitor backlink sources the client is missing |
Step 1 — DB Models
Add to packages/db/prisma/schema.prisma:
```prisma
// Global platform registry — seeded, managed by superadmin
model AIVisibilityPlatform {
  id           String   @id @default(cuid())
  key          String   @unique // "chatgpt" | "gemini" | "perplexity" | "claude"
  displayName  String   // "ChatGPT (GPT-4o)", "Google Gemini 2.0", etc.
  model        String   // "gpt-4o", "gemini-2.0-flash", "claude-sonnet-4-6"
  apiKeyEnvVar String   // env var name that holds the API key
  isEnabled    Boolean  @default(true)
  createdAt    DateTime @default(now())
  updatedAt    DateTime @updatedAt

  tenantPlatforms TenantAIVisibilityPlatform[]
}

// Per-tenant platform selection
model TenantAIVisibilityPlatform {
  tenantId   String
  platformId String
  isEnabled  Boolean @default(true)

  tenant   Tenant               @relation(fields: [tenantId], references: [id])
  platform AIVisibilityPlatform @relation(fields: [platformId], references: [id])

  @@id([tenantId, platformId])
  @@index([tenantId])
}

model AIVisibilityPrompt {
  id        String   @id @default(cuid())
  tenantId  String
  prompt    String   @db.Text
  intent    String   // "informational" | "transactional" | "navigational"
  category  String   // "local" | "brand" | "category" | "competitor"
  isActive  Boolean  @default(true)
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  tenant    Tenant                 @relation(fields: [tenantId], references: [id])
  snapshots AIVisibilitySnapshot[]

  @@index([tenantId])
  @@index([tenantId, isActive])
}

model AIVisibilitySnapshot {
  id           String   @id @default(cuid())
  tenantId     String
  promptId     String
  platform     String   // matches AIVisibilityPlatform.key
  isMentioned  Boolean
  sentiment    String?  // "positive" | "neutral" | "negative"
  rawExcerpt   String?  @db.Text
  citedSources String[]
  competitors  String[]
  runAt        DateTime @default(now())

  tenant Tenant             @relation(fields: [tenantId], references: [id])
  prompt AIVisibilityPrompt @relation(fields: [promptId], references: [id], onDelete: Cascade)

  @@index([tenantId])
  @@index([promptId])
  @@index([tenantId, platform])
  @@index([tenantId, runAt])
}
```

Step 2 — Platform Configuration
Why a separate config table (not hardcoded in the worker):
The existing adapter system (claude_local, gemini_local, codex_local) uses CLI subprocesses and is designed for single-provider content generation. The AI visibility monitor is fundamentally different — it calls multiple external LLM APIs simultaneously for the same prompt. It needs its own configuration layer.
Seeded platforms (added in packages/db/prisma/seed.ts):
| key | displayName | model | apiKeyEnvVar |
|---|---|---|---|
| chatgpt | ChatGPT (GPT-4o) | gpt-4o | OPENAI_API_KEY |
| gemini | Google Gemini 2.0 | gemini-2.0-flash | GOOGLE_GENERATIVE_AI_KEY |
| perplexity | Perplexity | llama-3.1-sonar-large-128k-online | PERPLEXITY_API_KEY |
| claude | Claude (Sonnet) | claude-sonnet-4-6 | ANTHROPIC_API_KEY |
New env vars to add to `.env.example`:
- `GOOGLE_GENERATIVE_AI_KEY` — direct Gemini API (separate from the OAuth-based GSC/GA4 credentials)
- `PERPLEXITY_API_KEY` — Perplexity API
OPENAI_API_KEY and ANTHROPIC_API_KEY already exist.
Extensibility: Adding a new platform (e.g., Microsoft Copilot) requires only a new seed row and a new env var — no worker code changes. The worker reads all enabled platforms from DB at runtime and skips any whose apiKeyEnvVar is not set in process.env.
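A sketch of that runtime filtering, with a `PlatformRow` shape mirroring the proposed `AIVisibilityPlatform` model (the `env` parameter stands in for `process.env` so the logic stays testable):

```typescript
// Platform row shape assumed to mirror the proposed AIVisibilityPlatform model.
interface PlatformRow {
  key: string;
  model: string;
  apiKeyEnvVar: string; // name of the env var holding the API key
  isEnabled: boolean;   // global superadmin toggle
}

// Keep only platforms that are globally enabled AND have their API key set.
// Missing keys are skipped with a warning, never a hard failure.
function resolveRunnablePlatforms(
  platforms: PlatformRow[],
  env: Record<string, string | undefined>,
): PlatformRow[] {
  return platforms.filter((p) => {
    if (!p.isEnabled) return false;
    if (!env[p.apiKeyEnvVar]) {
      console.warn(`[ai-visibility] skipping ${p.key}: ${p.apiKeyEnvVar} not set`);
      return false;
    }
    return true;
  });
}
```

The per-tenant `TenantAIVisibilityPlatform` check would then filter this list one step further inside the worker.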
Who manages what:
| Who | Where | What they control |
|---|---|---|
| Superadmin | Manage portal → Agent Settings | Global on/off per platform, model selection |
| DM / Client | Dashboard → AI Search Settings | Which platforms are enabled for their tenant |
| DevOps | .env | API keys (never stored in DB) |
Platform management API endpoints (on the superadmin manage server):
| Method | Path | What it does |
|---|---|---|
| GET | /admin/v1/ai-visibility/platforms | List all platforms with enabled status |
| PATCH | /admin/v1/ai-visibility/platforms/:id | Toggle global isEnabled, update model |
Tenant platform selection endpoints (on the main API, tenant-scoped):
| Method | Path | What it does |
|---|---|---|
| GET | /tenant/v1/ai-visibility/platforms | List platforms available to this tenant with their enabled state |
| PATCH | /tenant/v1/ai-visibility/platforms/:platformId | Toggle a platform on/off for this tenant |
What Are Monitored Prompts?
These are questions real users type into ChatGPT, Gemini, or Perplexity when looking for a business like the client. Each prompt is sent verbatim to each LLM platform on a nightly schedule. The response is parsed for:
- isMentioned — does the client’s brand name appear in the response?
- rawExcerpt — the surrounding sentence(s) where the brand is mentioned
- citedSources — URLs the LLM referenced in its response
- competitors — other brand names that appeared (matched against the tenant’s competitor list)
Sample prompts for a plumbing business in Austin, TX:
| Category | Prompt | Intent |
|---|---|---|
| local | “Who are the best plumbers in Austin TX?” | transactional |
| local | “Best emergency plumber near me in Austin” | transactional |
| category | “Which plumbing company should I call for a water heater leak?” | transactional |
| informational | “What should I look for when hiring a plumber?” | informational |
| competitor | “Is [CompetitorName] a good plumbing service?” | navigational |
| brand | “Tell me about [ClientBrandName] plumbing” | navigational |
How prompts are populated:
- Auto-seeded on setup — when the client context pipeline completes, a starter set of prompts is generated from the tenant’s business name, location, industry, and competitor list. Similar to how the Keyword Researcher auto-generates keyword clusters.
- Manual management — DM or client can add, edit, toggle, and delete prompts via the `/ai-search/prompts` dashboard page.
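The auto-seeding step can be as simple as string templates over the tenant profile. A sketch with illustrative templates — the field names and template set are assumptions; the real set would be tuned per industry:

```typescript
// Hypothetical tenant-profile input, derived from the client context pipeline.
interface SeedInput {
  brandName: string;
  city: string;
  industry: string; // e.g. "plumbing"
  competitors: string[];
}

interface SeedPrompt {
  prompt: string;
  category: string; // "local" | "informational" | "brand" | "competitor"
  intent: string;   // "transactional" | "informational" | "navigational"
}

// Generate a starter prompt set from business name, location, industry,
// and competitor list, mirroring the sample prompt table above.
function seedPrompts(input: SeedInput): SeedPrompt[] {
  const prompts: SeedPrompt[] = [
    {
      prompt: `Who are the best ${input.industry} companies in ${input.city}?`,
      category: "local",
      intent: "transactional",
    },
    {
      prompt: `What should I look for when hiring a ${input.industry} company?`,
      category: "informational",
      intent: "informational",
    },
    {
      prompt: `Tell me about ${input.brandName}`,
      category: "brand",
      intent: "navigational",
    },
  ];
  for (const competitor of input.competitors) {
    prompts.push({
      prompt: `Is ${competitor} a good ${input.industry} service?`,
      category: "competitor",
      intent: "navigational",
    });
  }
  return prompts;
}
```

Each generated prompt becomes one `AIVisibilityPrompt` row; DMs can then edit or disable individual prompts from the dashboard.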
Step 3 — BullMQ Worker
New file: packages/agents/src/workers/ai-visibility-monitor.worker.ts
- Queue: `agent__ai-visibility-monitor`
- Job payload: `{ tenantId, promptIds? }` — `promptIds: null` means run all active prompts
- Job ID pattern: `ai-visibility__${tenantId}__${Date.now()}` (timestamp = always fresh, no dedup)
- Nightly schedule: BullMQ `repeat` option on worker startup (cron `0 2 * * *`)
- Concurrency: 2 (I/O-bound — waiting on external LLM APIs)
Worker flow per job:
- Load all `AIVisibilityPlatform` records where `isEnabled = true`
- For each platform, check `TenantAIVisibilityPlatform` — skip if the tenant has it disabled
- Check `process.env[platform.apiKeyEnvVar]` — skip with a warning log if the API key is not set
- Load all active `AIVisibilityPrompt` records for `tenantId`
- For each prompt × each enabled platform:
  - Send the prompt to the platform API using `platform.model`
  - Parse the response: check whether `tenant.name` appears (case-insensitive)
  - Extract the surrounding sentences as `rawExcerpt`
  - Pull any URLs from the response as `citedSources`
  - Detect other brand names as `competitors` against the tenant’s competitor list
  - Create an `AIVisibilitySnapshot` with `platform: platform.key`
- Emit an `agent:event` WebSocket event so the dashboard updates live
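The parsing steps in the flow above (brand check, excerpt extraction, cited URLs, competitor detection) can be sketched as one pure function. The sentence splitting and URL regex are naive placeholders, but they show how each snapshot field is populated:

```typescript
// Output shape mirrors the snapshot fields populated by the worker.
interface ParsedResponse {
  isMentioned: boolean;
  rawExcerpt: string | null;
  citedSources: string[];
  competitors: string[];
}

function parseLLMResponse(
  response: string,
  brandName: string,
  competitorNames: string[],
): ParsedResponse {
  const lower = response.toLowerCase();
  const isMentioned = lower.includes(brandName.toLowerCase());

  // rawExcerpt: the sentence(s) containing the brand name.
  let rawExcerpt: string | null = null;
  if (isMentioned) {
    const sentences = response.split(/(?<=[.!?])\s+/);
    rawExcerpt = sentences
      .filter((s) => s.toLowerCase().includes(brandName.toLowerCase()))
      .join(" ");
  }

  // citedSources: any URLs appearing in the response text.
  const citedSources = response.match(/https?:\/\/[^\s)]+/g) ?? [];

  // competitors: names from the tenant's competitor list found in the response.
  const competitors = competitorNames.filter((c) => lower.includes(c.toLowerCase()));

  return { isMentioned, rawExcerpt, citedSources, competitors };
}
```

A production version would need fuzzier brand matching (abbreviations, possessives, word boundaries) — substring matching is the bare minimum shown here.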
Step 4 — API Router
New file: apps/api/src/routers/ai-visibility.ts, registered at /tenant/v1/ai-visibility in both app.ts (tests) and index.ts (server).
Prompt management:
| Method | Path | What it does |
|---|---|---|
| GET | /prompts | List all prompts for the tenant (active + inactive) |
| POST | /prompts | Add a new monitored prompt |
| PATCH | /prompts/:id | Toggle active; edit prompt text / category / intent |
| DELETE | /prompts/:id | Delete a prompt + cascade its snapshots |
Snapshot queries:
| Method | Path | What it does |
|---|---|---|
| GET | /snapshots | Latest snapshot per prompt per platform (for the overview dashboard) |
| GET | /snapshots/history | Historical snapshots for trend charts |
| POST | /run | Trigger an on-demand visibility check run |
Platform selection (tenant-scoped):
| Method | Path | What it does |
|---|---|---|
| GET | /platforms | List all globally enabled platforms with this tenant’s enabled state |
| PATCH | /platforms/:platformId | Toggle a platform on/off for this tenant |
Step 5 — Dashboard Pages
New section: apps/dashboard/src/app/(dashboard)/ai-search/
| Route | Content |
|---|---|
| /ai-search | Overview: score cards per platform + latest results table per prompt |
| /ai-search/prompts | Manage monitored prompts — add / edit / toggle / delete |
| /ai-search/settings | Platform selection — toggle which LLMs to check (ChatGPT, Gemini, etc.) |
| /ai-search/history | Trend charts — visibility score over time per platform |
Overview page layout:
- Score cards: “Mentioned in X% of prompts on ChatGPT / Gemini / Perplexity” (one card per enabled platform)
- Per-prompt results table: prompt text → per-platform ✓/✗ columns → last run date → raw excerpt on expand
- “Run Now” button triggers `POST /tenant/v1/ai-visibility/run`
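The per-platform score cards are a straight aggregation over the latest snapshot per prompt. A minimal sketch, assuming the snapshot query returns rows with these fields:

```typescript
// One row per prompt per platform — the "latest snapshot" query result.
interface LatestSnapshot {
  promptId: string;
  platform: string;
  isMentioned: boolean;
}

// Returns a whole-number percentage per platform, e.g. { chatgpt: 50, gemini: 100 }.
function mentionRateByPlatform(snapshots: LatestSnapshot[]): Record<string, number> {
  const totals: Record<string, { mentioned: number; total: number }> = {};
  for (const s of snapshots) {
    const t = (totals[s.platform] ??= { mentioned: 0, total: 0 });
    t.total += 1;
    if (s.isMentioned) t.mentioned += 1;
  }
  const rates: Record<string, number> = {};
  for (const [platform, t] of Object.entries(totals)) {
    rates[platform] = Math.round((t.mentioned / t.total) * 100);
  }
  return rates;
}
```

Each entry in the returned record drives one score card ("Mentioned in X% of prompts on ChatGPT").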
Settings page layout:
- Platform toggles — enable/disable ChatGPT, Gemini, Perplexity, Claude per tenant
- Only shows platforms that are globally enabled by superadmin and have their API key configured
Step 6 — Sidebar Navigation
Add “AI Search” to apps/dashboard/src/components/sidebar.tsx under the SEO NavGroup, following the existing NavLeaf pattern.
Phase 2 — Competitor Gap + Sentiment
Builds directly on the Competitor model (Step 0) and the snapshot data collected in Phase 1.
Competitor Gap Analysis
What it answers: “Which prompts mention a competitor but not us?”
Query pattern: for each active `Competitor`, find all `AIVisibilitySnapshot` rows where the `competitors` array contains `competitor.name` AND `isMentioned = false` for the same `promptId` + `platform` within the same `runAt` window. Each match is a “gap” — the AI recommended a competitor instead of the client for that prompt on that platform.
Surface these gaps as content opportunities that feed into the content-brief-writer agent: “AI doesn’t mention you for this prompt — generate content to fix this.”
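Expressed over in-memory rows (in production this would be a Prisma query with array filters), the gap detection is a sketch like the following; field names mirror the proposed snapshot model:

```typescript
// Snapshot row shape assumed to mirror the proposed AIVisibilitySnapshot model.
interface GapSnapshot {
  promptId: string;
  platform: string;
  isMentioned: boolean;  // client mentioned in this response?
  competitors: string[]; // competitor brand names detected in the response
}

interface Gap {
  promptId: string;
  platform: string;
  competitor: string;
}

// A gap = a prompt/platform pair where a competitor was mentioned
// but the client was not.
function findGaps(rows: GapSnapshot[], competitorNames: string[]): Gap[] {
  const gaps: Gap[] = [];
  for (const row of rows) {
    if (row.isMentioned) continue; // client was mentioned — not a gap
    for (const name of competitorNames) {
      if (row.competitors.includes(name)) {
        gaps.push({ promptId: row.promptId, platform: row.platform, competitor: name });
      }
    }
  }
  return gaps;
}
```

Each returned `Gap` is one row in the gap table and one candidate input to the content-brief-writer trigger.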
Dashboard page: apps/dashboard/src/app/(dashboard)/ai-search/competitors/
- Side-by-side score comparison: client vs each competitor across all monitored prompts
- Gap table: prompt → which competitor is mentioned → which platforms → suggested content action
- “Generate Brief” button per gap → triggers `content-brief-writer` with the gap context as input
Brand Sentiment & Narrative Analysis
New worker: packages/agents/src/workers/insights/brand-narrative-analyst.worker.ts
- Queue: `agent__brand-narrative-analyst`
- Trigger: runs after each nightly AI visibility monitor run completes
- Input: all `AIVisibilitySnapshot` rows for the tenant where `isMentioned = true` and `rawExcerpt` is set
- Output: structured JSON stored on a new `AIVisibilityNarrative` DB model:
  - `sentiment`: `"positive" | "neutral" | "negative"` per platform
  - `attributes`: string array — key adjectives/phrases (“affordable”, “local expert”, “unreliable”)
  - `citedTopics`: topics the brand is cited for
  - `missingTopics`: topics competitors are cited for but the client isn’t
  - `summary`: narrative paragraph
Dashboard: Narrative tab under /ai-search — attribute cloud, sentiment trend per platform, per-LLM perception comparison.
Phase 3 — Prompt Research + Position History
Keyword Position History & Rank Change Alerts
Current gap: GSC data is fetched live on demand. No historical snapshots are stored. Without history, position trends are invisible and there is no way to alert on dips or confirm that SEO efforts are working.
New DB model:
```prisma
model GSCKeywordSnapshot {
  id           String   @id @default(cuid())
  tenantId     String
  channelId    String   // ConnectedChannel ID for the GSC connection
  keyword      String
  position     Float    // Average position (GSC returns fractional values)
  clicks       Int
  impressions  Int
  ctr          Float
  snapshotDate DateTime // Normalized to midnight UTC — one row per keyword per day

  tenant  Tenant           @relation(fields: [tenantId], references: [id])
  channel ConnectedChannel @relation(fields: [channelId], references: [id])

  @@unique([channelId, keyword, snapshotDate])
  @@index([tenantId, snapshotDate])
  @@index([channelId, keyword])
}
```

Nightly job: `packages/agents/src/workers/jobs/gsc-keywords-snapshot.job.ts`
- Runs at `0 3 * * *` (after the AI visibility monitor at 2am)
- For each tenant with an active GSC channel:
  - Fetch the top 100 keywords from the GSC API for yesterday
  - Upsert `GSCKeywordSnapshot` rows (one per keyword)
  - Compare yesterday’s position vs the 7-day-ago snapshot for the same keyword
  - Detect changes → enqueue alert notifications for significant moves
Alert tiers:
| Tier | Condition | Action |
|---|---|---|
| Critical dip | Top-10 keyword drops 3+ positions vs yesterday | Immediate notification |
| Warning dip | Any keyword drops 5+ positions vs 7-day average | Included in next daily digest |
| Positive win | Any keyword enters top 10 for the first time | Immediate notification |
| Recovery | A previously alerted keyword recovers to within 1 position of its prior best | Immediate notification |
Alerts fire through the existing enqueueNotification() infrastructure — no new notification system needed.
Alert configuration (stored as fields on a new TenantSEOSettings model or as a JSON config on the tenant):
- `gscAlertsEnabled` (default: `true`)
- `criticalDipThreshold` (default: `3` positions, applies to top-10 keywords)
- `warningDipThreshold` (default: `5` positions, applies to any keyword)
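The tier rules above can be sketched as a classification function. Thresholds default to the proposed `TenantSEOSettings` values; the recovery tier is omitted here because it needs per-keyword alert history, and remember that in GSC a *lower* position number is *better*, so a "drop" is a numeric increase:

```typescript
// Per-keyword position context assembled by the nightly snapshot job.
interface PositionContext {
  position: number;            // today's average position (lower = better)
  yesterday: number | null;    // yesterday's position, if a snapshot exists
  sevenDayAvg: number | null;  // 7-day average position, if enough history
  everInTop10: boolean;        // was this keyword in the top 10 before today?
}

type AlertTier = "critical_dip" | "warning_dip" | "positive_win" | null;

function classifyAlert(
  ctx: PositionContext,
  criticalDipThreshold = 3, // proposed TenantSEOSettings default
  warningDipThreshold = 5,  // proposed TenantSEOSettings default
): AlertTier {
  // Critical dip: a top-10 keyword fell 3+ positions vs yesterday.
  if (
    ctx.yesterday !== null &&
    ctx.yesterday <= 10 &&
    ctx.position - ctx.yesterday >= criticalDipThreshold
  ) {
    return "critical_dip";
  }
  // Warning dip: any keyword fell 5+ positions vs its 7-day average.
  if (ctx.sevenDayAvg !== null && ctx.position - ctx.sevenDayAvg >= warningDipThreshold) {
    return "warning_dip";
  }
  // Positive win: first-ever entry into the top 10.
  if (ctx.position <= 10 && !ctx.everInTop10) return "positive_win";
  return null;
}
```

The job would call this per keyword and enqueue a notification for any non-null tier.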
Dashboard UI additions:
- GSC channel detail page (`GoogleSearchConsoleChannelDetail.tsx`): add a 90-day position history sparkline per keyword row in the top keywords table, and a position trend chart for the selected keyword
- `/ai-search/history`: keyword rank trend charts alongside AI visibility trend charts — a unified “search visibility over time” view
AI Prompt Research (Topic Demand)
Extend topic-researcher.worker.ts with a second phase: after discovering candidate topics, send each as a prompt to the tenant’s enabled LLM platforms and record:
- Whether any brand is cited (no citation = high opportunity)
- How many competing brands are cited (difficulty proxy)
- Intent classification of the topic
Results populate a “Topic Opportunities” tab under /ai-search — topics with high relevance and low competition in AI responses, ranked by opportunity score.
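One possible opportunity score, shown as a sketch: the exact weighting of relevance against the competing-citation count is an assumption to be tuned, not a spec:

```typescript
// Signals collected for one candidate topic during the LLM validation phase.
interface TopicSignal {
  topic: string;
  relevance: number;       // 0..1 relevance score from the topic researcher
  citedBrandCount: number; // competing brands cited across enabled platforms
  clientCited: boolean;    // is the client already cited for this topic?
}

// Higher = better opportunity. Fewer competing citations means the topic
// is easier to win; an already-cited topic is not an opportunity at all.
function opportunityScore(t: TopicSignal): number {
  if (t.clientCited) return 0;
  const difficulty = 1 / (1 + t.citedBrandCount); // 1 when uncontested
  return Math.round(t.relevance * difficulty * 100);
}
```

Sorting topics by this score descending yields the ranked “Topic Opportunities” list for the dashboard tab.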
Phase 4 — AI Overview Tracking
- Evaluate ValueSERP / DataForSEO for SERP API
- Google AI Overview presence tracking per keyword
Key Files Reference
Phase 1 — LLM Visibility Foundation
| File | Change |
|---|---|
| packages/db/prisma/schema.prisma | Add Competitor, AIVisibilityPlatform, TenantAIVisibilityPlatform, AIVisibilityPrompt, AIVisibilitySnapshot models |
| packages/db/prisma/seed.ts | Seed 4 platforms: chatgpt, gemini, perplexity, claude |
| .env.example | Add GOOGLE_GENERATIVE_AI_KEY, PERPLEXITY_API_KEY |
| packages/agents/src/workers/setup.worker.ts | Add post-processing step: parse competitor-researcher output → create Competitor records |
| packages/agents/src/workers/ai-visibility-monitor.worker.ts | New — reads platform config + competitor list from DB, sends prompts, parses responses, stores snapshots |
| packages/queue/src/queues.ts | Register agent__ai-visibility-monitor queue |
| apps/api/src/routers/competitors.ts | New — competitor CRUD (/tenant/v1/competitors) |
| apps/api/src/routers/ai-visibility.ts | New — prompt CRUD + snapshot queries + run trigger + tenant platform selection |
| apps/api/src/app.ts | Register competitors and ai-visibility routers (required for integration tests) |
| apps/api/src/index.ts | Register competitors and ai-visibility routers (production server) |
| apps/dashboard/src/app/(dashboard)/settings/competitors/ | New — competitor list management page |
| apps/dashboard/src/app/(dashboard)/ai-search/ | New — overview, prompts management, settings, history pages |
| apps/dashboard/src/components/sidebar.tsx | Add AI Search nav entry under SEO group; add Competitors under Settings |
Phase 2 — Competitor Gap + Sentiment
| File | Change |
|---|---|
| packages/db/prisma/schema.prisma | Add AIVisibilityNarrative model |
| packages/agents/src/workers/insights/brand-narrative-analyst.worker.ts | New — sentiment + attribute extraction from stored rawExcerpt snapshots |
| packages/queue/src/queues.ts | Register agent__brand-narrative-analyst queue |
| apps/dashboard/src/app/(dashboard)/ai-search/competitors/ | New — competitor gap analysis view with “Generate Brief” action |
Phase 3 — Prompt Research + Position History
| File | Change |
|---|---|
| packages/db/prisma/schema.prisma | Add GSCKeywordSnapshot, TenantSEOSettings models |
| packages/agents/src/workers/topic-researcher.worker.ts | Extend with LLM validation phase → topic opportunity scores |
| packages/agents/src/workers/jobs/gsc-keywords-snapshot.job.ts | New — nightly GSC keyword snapshot + alert detection |
| packages/queue/src/queues.ts | Register jobs__gsc-keywords-snapshot queue |
| apps/dashboard/src/app/(dashboard)/channels/[id]/GoogleSearchConsoleChannelDetail.tsx | Add 90-day position history chart + sparklines per keyword row |
Phase 4 — AI Overview Tracking
| File | Change |
|---|---|
| packages/providers/serp/ | New — SERP API provider (ValueSERP or DataForSEO) |
| packages/agents/src/workers/jobs/aio-tracking.job.ts | New — Google AI Overview presence per keyword |