# MaxListenersExceededWarning on Agent, Billing, Reporting Servers
Status: ✅ Fixed (2026-05-05) — process.setMaxListeners() added to all three server entry points
Servers: apps/servers/agents, apps/servers/billing, apps/servers/reporting
## Symptom
All three servers print this warning on startup:
```
(node:58540) MaxListenersExceededWarning: Possible EventEmitter memory leak detected.
11 exit listeners added to [process]. MaxListeners is 10.
Use emitter.setMaxListeners() to increase limit
```

## Root Cause
Each BullMQ Worker instance adds one or more exit listeners to the process EventEmitter to ensure graceful shutdown (drain queue, close connections). When a server starts more than 10 workers, the total listener count exceeds Node’s default limit of 10.
The agents server starts many workers (insight workers, action suggesters, content workers, etc.) — the count reaches 11+ quickly.
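The mechanism can be sketched without BullMQ or Redis at all: Node tracks a per-event listener count on every EventEmitter and emits the warning as soon as a count passes the emitter's limit (10 by default, the same as `process`).

```typescript
// Minimal sketch of the warning mechanism, using a plain EventEmitter in
// place of `process` so it runs standalone (no BullMQ, no Redis).
import { EventEmitter } from "node:events";

const emitter = new EventEmitter(); // default max listeners: 10, same as process

// Each BullMQ Worker registers a shutdown hook; simulate 11 such registrations.
for (let i = 0; i < 11; i++) {
  emitter.on("exit", () => {
    /* per-worker drain/close would run here */
  });
}

// 11 listeners against a limit of 10: Node emits MaxListenersExceededWarning
// (asynchronously, via process.emitWarning) on the 11th `on()` call.
console.log(emitter.listenerCount("exit")); // 11
console.log(emitter.getMaxListeners()); // 10
```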
## Fix Applied
The limit is raised with process.setMaxListeners() in each affected server's entry point, immediately after createLogger():
| Server | File | Limit set |
|---|---|---|
| agents | apps/servers/agents/src/index.ts | 100 (50+ workers each add a listener) |
| billing | apps/servers/billing/src/index.ts | 30 |
| reporting | apps/servers/reporting/src/index.ts | 30 |
```typescript
// e.g. agents/src/index.ts:
process.setMaxListeners(100); // 50+ BullMQ workers each add a process exit listener
```

This is safe because every listener is intentional (BullMQ's per-worker shutdown drain). Raising the limit suppresses the warning without hiding real leaks: a genuine leak would still have to exceed the new ceiling before going unnoticed.
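If extra safety is wanted, a small startup guard (not part of the fix; the function name here is hypothetical) can keep leaks visible under the raised ceiling by logging the listener count once all workers are up:

```typescript
// Hypothetical startup check: after all workers are created, compare the
// listener count against the expected ceiling so a genuine leak still
// shows up in logs even though the warning threshold was raised.
import { EventEmitter } from "node:events";

function checkListenerHeadroom(em: EventEmitter, event: string, ceiling: number): number {
  const count = em.listenerCount(event);
  if (count > ceiling) {
    console.warn(`listener count ${count} for "${event}" exceeds expected ceiling ${ceiling}`);
  }
  return count;
}

// Simulate a server that raised the limit to 30 and started 12 workers.
const em = new EventEmitter();
em.setMaxListeners(30);
for (let i = 0; i < 12; i++) em.on("exit", () => {});
console.log(checkListenerHeadroom(em, "exit", 30)); // 12
```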
## Alternatives Considered
- Shared shutdown coordinator: cleaner architecturally, but it requires refactoring every worker; high risk for no functional gain.
- BullMQ v5 built-in drain: already used via `worker.close()` in shutdown handlers; the extra process listeners are a BullMQ implementation detail, not a user bug.
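For reference, the rejected coordinator would look roughly like this (all names are hypothetical, not from the codebase): each worker registers a close callback, and the coordinator installs exactly one process-level listener regardless of worker count.

```typescript
// Sketch of the rejected shared-shutdown-coordinator design.
type CloseFn = () => Promise<void>;

const closers: CloseFn[] = [];
let hookInstalled = false;

function registerCloser(fn: CloseFn): void {
  closers.push(fn);
  if (hookInstalled) return;
  hookInstalled = true;
  process.once("SIGTERM", () => {
    // Drain every registered worker, then exit; one failure doesn't block shutdown.
    void Promise.allSettled(closers.map((close) => close())).then(() => process.exit(0));
  });
}

// Simulate 12 workers registering their worker.close() wrappers: the
// process-level listener count grows by exactly 1, not 12.
const before = process.listenerCount("SIGTERM");
for (let i = 0; i < 12; i++) {
  registerCloser(async () => {});
}
console.log(process.listenerCount("SIGTERM") - before); // 1
```

This keeps the listener count constant, which is why it is architecturally cleaner; the fix chosen instead avoids touching every worker's startup path.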