
servers/search-indexer — @leadmetrics/server-search-indexer

A dedicated Node.js background service that consumes search__sync BullMQ jobs and keeps the Typesense search index in sync with the PostgreSQL database. It bootstraps all 13 Typesense collections on startup and upserts or deletes documents as records change.

Source: apps/servers/search-indexer/


Why a Separate Service

| Concern | Reason |
| --- | --- |
| Isolation | Typesense failures never affect API response times |
| Async indexing | API routes enqueue jobs and return immediately; indexing happens in the background |
| Independent scaling | Index throughput scales by replica count |
| Single bootstrap | bootstrapCollections() runs once at startup rather than on every API request |

Architecture

```
API route / server action
└─ enqueueSearchSync({ collection, operation, recordId, tenantId })
        │
        ▼
Redis (BullMQ) — queue: search__sync
        │
        ▼
apps/servers/search-indexer
└─ sync.worker.ts
   ├─ upsert: db.<model>.findUnique (scoped select) → Typesense upsert
   └─ delete: Typesense delete (404 swallowed — idempotent)
```
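The enqueue side of this flow can be sketched as follows. The payload fields are taken from the diagram above; `buildSearchSyncJob` is an illustrative helper, not the real implementation (the actual `enqueueSearchSync` would pass this payload to `queue.add` on the BullMQ `search__sync` queue):

```typescript
// Hypothetical sketch of the search__sync job payload; field names come
// from the architecture diagram, the helper name is illustrative.
type SearchSyncJob = {
  collection: string;                 // Typesense collection, e.g. "blogs"
  operation: "upsert" | "delete";
  recordId: string;
  tenantId: string;
};

// Builds the job description that the real enqueueSearchSync would hand
// to queue.add(...) on the BullMQ "search__sync" queue.
function buildSearchSyncJob(input: SearchSyncJob): { name: string; data: SearchSyncJob } {
  return { name: "search__sync", data: input };
}
```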

Supported Collections (13)

| Collection | Prisma model | Indexed fields |
| --- | --- | --- |
| blogs | BlogPost | title, metaDescription, status, tenantId |
| social_posts | SocialPost | bodyText, engagementHook, platform, status, tenantId |
| landing_pages | LandingPage | title, metaDescription, status, tenantId |
| newsletters | EmailNewsletter | subject, previewText, status, tenantId |
| activities | Activity | label, notes, type, status, tenantId |
| campaigns | Campaign | name, status, tenantId |
| content_briefs | ContentBrief | title, topic, angle, status, tenantId |
| contacts | Contact | name, email, company, stage, tenantId |
| leads | Lead | name, company, jobTitle, status, tenantId |
| keywords | Keyword | keyword, source, tenantId |
| reports | Report | label, tenantId |
| backlinks | Backlink | sourceDomain, anchorText, status, tenantId |
| tenants | Tenant | name, website, pocName, industry, status |

Each findUnique uses a scoped select containing only the indexed fields plus id, createdAt, updatedAt — large text fields (e.g. BlogPost.htmlContent) are never fetched.
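A minimal sketch of how such a scoped select could be assembled. `BLOG_FIELDS` and `scopedSelect` are illustrative names, not the actual FIELD_MAP implementation; the field list is taken from the blogs row of the table above:

```typescript
// Illustrative field list for the blogs collection (from the table above).
const BLOG_FIELDS = ["title", "metaDescription", "status", "tenantId"] as const;

// Builds a Prisma-style scoped `select`: only the indexed fields plus
// id/createdAt/updatedAt, so large columns like htmlContent are never fetched.
function scopedSelect(fields: readonly string[]): Record<string, true> {
  const select: Record<string, true> = { id: true, createdAt: true, updatedAt: true };
  for (const f of fields) select[f] = true;
  return select;
}
```

The resulting object would be passed as the `select` option of `db.blogPost.findUnique`, keeping the query payload small.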


Sync Worker (src/workers/sync.worker.ts)

Queue: search__sync | Concurrency: SYNC_WORKER_CONCURRENCY (default 10)

Upsert flow:

1. Fetch the record from Prisma with the scoped select.
2. If not found: log a warning and return (soft skip — the record may have been deleted).
3. Build the document: { id, updatedAt (ms), ...FIELD_MAP fields }.
4. client.collections(collection).documents().upsert(document)
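Step 3 above can be sketched as a pure document builder. The names here are illustrative; the real worker reads the field list from the collection's FIELD_MAP entry:

```typescript
// Sketch of step 3: build the Typesense document from the fetched record.
// updatedAt is converted to epoch milliseconds so Typesense can sort on it.
type Indexable = { id: string; updatedAt: Date; [key: string]: unknown };

function buildDocument(record: Indexable, fields: string[]): Record<string, unknown> {
  const doc: Record<string, unknown> = {
    id: record.id,
    updatedAt: record.updatedAt.getTime(), // ms since epoch
  };
  for (const f of fields) doc[f] = record[f]; // copy only the indexed fields
  return doc;
}
```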

Delete flow:

1. client.collections(collection).documents(recordId).delete()
2. 404 → swallow (already deleted — idempotent)
3. Other errors → rethrow (BullMQ will retry)
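The 404-swallowing behavior can be sketched like this. The `httpStatus` property mirrors the error shape of the Typesense client, which is an assumption in this sketch; `remove` stands in for `client.collections(collection).documents(recordId).delete()`:

```typescript
// A 404 from Typesense means the document is already gone, so the delete
// is treated as successful (idempotent). Anything else is rethrown so
// BullMQ can retry the job.
function isNotFound(err: unknown): boolean {
  return (err as { httpStatus?: number } | null)?.httpStatus === 404;
}

async function deleteDocument(remove: () => Promise<unknown>): Promise<void> {
  try {
    await remove();
  } catch (err) {
    if (isNotFound(err)) return; // already deleted: swallow
    throw err;                   // other errors: rethrow for retry
  }
}
```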

Config (src/config.ts)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| DATABASE_URL | yes | (none) | Prisma connection string |
| REDIS_URL | no | redis://localhost:6379 | BullMQ connection |
| TYPESENSE_URL | no | http://localhost:8108 | Typesense server URL |
| TYPESENSE_ADMIN_API_KEY | yes | (none) | Typesense admin key |
| SYNC_WORKER_CONCURRENCY | no | 10 | Worker concurrency |
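The contract in the table can be sketched in plain TypeScript. The real config.ts uses a Zod schema with safeParse; this dependency-free sketch only illustrates the required/optional split and defaults:

```typescript
// Env contract from the table above; names match the real variables,
// the loader itself is an illustrative stand-in for the Zod schema.
type Config = {
  DATABASE_URL: string;
  REDIS_URL: string;
  TYPESENSE_URL: string;
  TYPESENSE_ADMIN_API_KEY: string;
  SYNC_WORKER_CONCURRENCY: number;
};

function loadConfig(env: Record<string, string | undefined>): Config {
  const missing = ["DATABASE_URL", "TYPESENSE_ADMIN_API_KEY"].filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`); // fail fast
  }
  return {
    DATABASE_URL: env.DATABASE_URL as string,
    REDIS_URL: env.REDIS_URL ?? "redis://localhost:6379",
    TYPESENSE_URL: env.TYPESENSE_URL ?? "http://localhost:8108",
    TYPESENSE_ADMIN_API_KEY: env.TYPESENSE_ADMIN_API_KEY as string,
    SYNC_WORKER_CONCURRENCY: Number(env.SYNC_WORKER_CONCURRENCY ?? 10),
  };
}
```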

Startup Flow

  1. Load config via Zod safeParse (fails fast with formatted error on missing vars).
  2. bootstrapCollections() — creates all 13 Typesense collections if they don’t exist.
  3. Connect IORedis with maxRetriesPerRequest: null.
  4. Start sync worker.
  5. Graceful shutdown on SIGTERM/SIGINT: drain worker (worker.close()) then redis.quit().
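Step 5 can be sketched with a small shutdown helper. `worker.close()` and `redis.quit()` match the names above; the helper itself (and its returned order list) is illustrative:

```typescript
// Sketch of graceful shutdown: drain the worker first so in-flight jobs
// finish, then disconnect Redis. Returns the step order for observability
// in this sketch only.
async function gracefulShutdown(
  worker: { close: () => Promise<void> },
  redis: { quit: () => Promise<void> },
): Promise<string[]> {
  const order: string[] = [];
  await worker.close(); // waits for in-flight jobs to complete
  order.push("worker");
  await redis.quit();   // safe to disconnect once the worker is drained
  order.push("redis");
  return order;
}
```

Closing the worker before Redis matters: quitting the connection first would abort jobs that are mid-flight.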

File Structure

```
apps/servers/search-indexer/
|-- src/
|   |-- index.ts           Entry point — bootstrap, Redis, worker, shutdown
|   |-- config.ts          Zod env validation
|   +-- workers/
|       +-- sync.worker.ts BullMQ worker: upsert/delete documents in Typesense
|-- .env
|-- package.json
+-- tsconfig.json
```

© 2026 Leadmetrics — Internal use only