
OpenAI

Category: AI / LLM
Adapter: OpenAIAdapter in packages/agent-core/src/adapters/openai.ts
External SDK: openai (official Node.js SDK)


Purpose

OpenAI is an alternative LLM backend for tenants or platform operators who prefer GPT-4o, o1, or Codex over Claude. The platform adapter supports SSE streaming via the OpenAI Chat Completions API and manually manages conversation history (since OpenAI has no equivalent to Claude’s session resumption).

Tenants on Enterprise plans may configure their own OpenAI API key so agent runs bill to their account instead of the platform’s. This is especially useful for clients who already have OpenAI enterprise agreements with negotiated pricing.


Config Structure

Platform config (env vars, used when tenant has no custom key)

```
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
OPENAI_DEFAULT_MODEL=gpt-4o
OPENAI_EXPENSIVE_MODEL=o1
OPENAI_BASE_URL=https://api.openai.com/v1  # Override for Azure OpenAI
```

Per-tenant config (stored in integrations table, provider = 'openai')

```typescript
interface OpenAIIntegrationConfig {
  apiKey: string;   // Tenant's own OpenAI API key
  model?: string;   // Override default model (e.g. "gpt-4o", "o1")
  baseUrl?: string; // Override for Azure OpenAI endpoint
  orgId?: string;   // OpenAI organization ID (optional)
}
```
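Resolution between the two config sources can be sketched as a small pure function. This is a hypothetical helper (the name `resolveOpenAIConfig` and its shape are illustrative, not the platform's actual code) that prefers the tenant's integration config and falls back to the platform env vars above:

```typescript
// Hypothetical sketch: prefer tenant integration config, fall back to
// platform env vars. Names mirror the config structure documented above.
interface OpenAIIntegrationConfig {
  apiKey: string;
  model?: string;
  baseUrl?: string;
  orgId?: string;
}

interface ResolvedOpenAIConfig {
  apiKey: string;
  model: string;
  baseUrl: string;
  orgId?: string;
}

function resolveOpenAIConfig(
  tenant: OpenAIIntegrationConfig | null,
  env: Record<string, string | undefined> = process.env,
): ResolvedOpenAIConfig {
  return {
    apiKey: tenant?.apiKey ?? env.OPENAI_API_KEY ?? '',
    model: tenant?.model ?? env.OPENAI_DEFAULT_MODEL ?? 'gpt-4o',
    baseUrl: tenant?.baseUrl ?? env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1',
    orgId: tenant?.orgId,
  };
}
```

A lookup like this is what lets Enterprise tenants bill agent runs to their own account: when the `integrations` row exists, its `apiKey` wins.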

Integration Pattern

Adapter class (packages/agent-core/src/adapters/openai.ts)

The OpenAI adapter implements the same LLMAdapter interface as the Claude adapter, but uses SSE streaming via the REST API instead of subprocess NDJSON:

```typescript
import OpenAI from 'openai';

class OpenAIAdapter implements LLMAdapter {
  private client: OpenAI;
  private history: OpenAI.Chat.ChatCompletionMessageParam[] = [];

  constructor(
    private apiKey: string,
    private model: string,
    private baseUrl: string,
    private orgId?: string,
  ) {
    this.client = new OpenAI({ apiKey, baseURL: baseUrl, organization: orgId });
  }

  async *run(systemPrompt: string, userPrompt: string): AsyncGenerator<LLMEvent> {
    // Append to manual history for session-like behaviour
    this.history.push({ role: 'user', content: userPrompt });

    const stream = await this.client.chat.completions.create({
      model: this.model,
      max_tokens: 8192,
      messages: [
        { role: 'system', content: systemPrompt },
        ...this.history,
      ],
      tools: this.buildToolDefinitions(),
      stream: true,
    });

    let assistantContent = '';
    // Tool-call deltas arrive fragmented (the name and the JSON arguments
    // stream in pieces), so accumulate them by index and only dispatch
    // once the stream has completed.
    const toolCalls: Record<number, { id: string; name: string; args: string }> = {};

    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta;
      if (delta?.content) {
        assistantContent += delta.content;
        yield { type: 'content', text: delta.content };
      }
      if (delta?.tool_calls) {
        for (const call of delta.tool_calls) {
          const acc = (toolCalls[call.index] ??= { id: '', name: '', args: '' });
          if (call.id) acc.id = call.id;
          if (call.function?.name) acc.name = call.function.name;
          if (call.function?.arguments) acc.args += call.function.arguments;
        }
      }
    }

    // Dispatch completed tool calls after the stream ends
    for (const call of Object.values(toolCalls)) {
      const result = await this.dispatchTool(call);
      yield { type: 'tool_result', name: call.name, result };
    }

    // Save assistant reply to history
    this.history.push({ role: 'assistant', content: assistantContent });
  }
}
```

Key differences from Claude adapter

| Aspect | Claude (CLI) | OpenAI (REST) |
| --- | --- | --- |
| Session management | Claude manages via `--resume` | Adapter manually maintains `history[]` |
| Tool routing | Claude handles autonomously | Adapter must call tools and inject results |
| Streaming format | NDJSON (one JSON object per line) | SSE (`data: {...}` chunks) |
| Skill injection | `--add-dir` on CLI | Must include skill content in system prompt |
| Session persistence | `activities.claude_session_id` | `activities.llm_history` (JSON) |
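The streaming-format difference can be made concrete. The sketch below is illustrative only (the Claude CLI adapter and the OpenAI SDK each do their own parsing internally); it just shows the shape of one frame of each wire format:

```typescript
// Illustrative only: NDJSON is one JSON document per line; SSE wraps each
// JSON payload in a "data:" field and ends the stream with "[DONE]".
function parseNdjsonLine(line: string): unknown {
  // NDJSON: the whole line is a JSON document
  return JSON.parse(line);
}

function parseSseData(frame: string): unknown | null {
  // SSE: payload follows a "data: " prefix; "[DONE]" terminates the stream
  const payload = frame.replace(/^data:\s*/, '').trim();
  if (payload === '[DONE]') return null;
  return JSON.parse(payload);
}
```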

Session (history) persistence

Since OpenAI has no session resumption, conversation history is stored on the activities row as a JSON array of ChatCompletionMessageParam objects:

```typescript
// After activity completes
await db.update(activities)
  .set({ llmHistory: JSON.stringify(adapter.history) })
  .where(eq(activities.id, activityId));

// Next activity in pipeline — load history (select returns an array)
const [prev] = await db.select({ llmHistory: activities.llmHistory })
  .from(activities)
  .where(eq(activities.id, prevActivityId));
adapter.history = JSON.parse(prev?.llmHistory ?? '[]');
```

Azure OpenAI

For enterprise tenants using Azure OpenAI, set baseUrl to the Azure endpoint:

https://<resource-name>.openai.azure.com/openai/deployments/<deployment-name>

The OpenAI SDK accepts this via the baseURL constructor option.
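A small helper makes the URL template above harder to typo. This is a hypothetical sketch (the function name is illustrative); note that real Azure calls also need an `api-version` query parameter and `api-key` auth, which the SDK's constructor options (e.g. `defaultQuery` / `defaultHeaders`) can supply, and newer SDK versions ship a dedicated `AzureOpenAI` client:

```typescript
// Hypothetical helper: build the Azure OpenAI base URL from the resource
// and deployment names, following the template documented above.
function azureBaseUrl(resourceName: string, deploymentName: string): string {
  return `https://${resourceName}.openai.azure.com/openai/deployments/${deploymentName}`;
}
```

The result is what gets passed as `baseURL` when constructing the client for an Azure-backed tenant.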


Tool Definitions

Unlike Claude Code CLI (which provides tools natively), the OpenAI adapter must define tools in the request and handle dispatch manually:

```typescript
private buildToolDefinitions(): OpenAI.Chat.ChatCompletionTool[] {
  return [
    {
      type: 'function',
      function: {
        name: 'web_search',
        description: 'Search the web for current information',
        parameters: {
          type: 'object',
          properties: { query: { type: 'string' } },
          required: ['query'],
        },
      },
    },
    {
      type: 'function',
      function: {
        name: 'rag_search',
        description: 'Search the client knowledge base for relevant context',
        parameters: {
          type: 'object',
          properties: {
            query: { type: 'string' },
            datasetIds: { type: 'array', items: { type: 'string' } },
          },
          required: ['query'],
        },
      },
    },
  ];
}
```
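The dispatch side can be sketched as a name-keyed handler lookup. This is an illustrative sketch, not the adapter's actual `dispatchTool` implementation; the `CompletedToolCall` shape and handler map are assumptions:

```typescript
// Hypothetical sketch: route a completed tool call to a handler by name.
// Arguments arrive from the stream as a JSON string and must be parsed.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

interface CompletedToolCall {
  name: string;
  arguments: string; // JSON string accumulated from stream deltas
}

async function dispatchTool(
  call: CompletedToolCall,
  handlers: Record<string, ToolHandler>,
): Promise<unknown> {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(JSON.parse(call.arguments || '{}'));
}
```

In this sketch the handler map would carry one entry per tool definition above, e.g. `{ web_search: …, rag_search: … }`.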

Cost Calculation

OpenAI token costs are calculated from the final usage object in the stream:

```typescript
// Usage is only emitted when the request sets
// stream_options: { include_usage: true }; it arrives on the final chunk.
let usage: OpenAI.CompletionUsage | undefined;
for await (const chunk of stream) {
  if (chunk.usage) usage = chunk.usage; // { prompt_tokens, completion_tokens }
}
if (!usage) throw new Error('No usage reported on final chunk');

const usdCost =
  (usage.prompt_tokens / 1_000_000) * INPUT_COST_PER_MTok +
  (usage.completion_tokens / 1_000_000) * OUTPUT_COST_PER_MTok;
```

Model cost reference: packages/agent-core/src/costs.ts.
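The per-model rates presumably live in a table keyed by model name. The sketch below is illustrative of that shape only: the USD rates are placeholders, not OpenAI's actual prices, and the names (`MODEL_COSTS`, `usdCost`) are not claimed to match `costs.ts`:

```typescript
// Illustrative cost-table sketch. Rates are placeholders, not real prices.
interface ModelCost {
  inputPerMTok: number;  // USD per million prompt tokens
  outputPerMTok: number; // USD per million completion tokens
}

const MODEL_COSTS: Record<string, ModelCost> = {
  'gpt-4o': { inputPerMTok: 2.5, outputPerMTok: 10 },
  'o1': { inputPerMTok: 15, outputPerMTok: 60 },
};

function usdCost(model: string, promptTokens: number, completionTokens: number): number {
  const cost = MODEL_COSTS[model];
  if (!cost) throw new Error(`No cost entry for model: ${model}`);
  return (
    (promptTokens / 1_000_000) * cost.inputPerMTok +
    (completionTokens / 1_000_000) * cost.outputPerMTok
  );
}
```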


o1 Model Considerations

OpenAI’s o1 family (reasoning models) have different API behaviour:

  • max_tokens is replaced by max_completion_tokens
  • temperature is not supported (always 1)
  • system messages may not be supported on some o1 variants — use developer role instead
  • Reasoning tokens (completion_tokens_details.reasoning_tokens) are billed but not visible in output

The adapter handles these differences via an isO1Model(model) guard.
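A guard of that kind could look like the sketch below. This is a hypothetical implementation, not the adapter's actual code; the parameter renames follow the o1 restrictions listed above:

```typescript
// Hypothetical sketch of the o1 guard: detect o1-family models and adjust
// the request parameters per the restrictions documented above.
function isO1Model(model: string): boolean {
  return model === 'o1' || model.startsWith('o1-');
}

interface RequestParams {
  model: string;
  max_tokens?: number;
  max_completion_tokens?: number;
  temperature?: number;
  messages: { role: string; content: string }[];
}

function adaptForO1(params: RequestParams): RequestParams {
  if (!isO1Model(params.model)) return params;
  const { max_tokens, temperature, ...rest } = params;
  return {
    ...rest,
    // o1 uses max_completion_tokens and rejects temperature overrides
    max_completion_tokens: max_tokens,
    // some o1 variants reject `system`; send `developer` instead
    messages: params.messages.map((m) =>
      m.role === 'system' ? { ...m, role: 'developer' } : m,
    ),
  };
}
```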


Test Cases

Unit tests (packages/agent-core/src/adapters/openai.test.ts)

| Test | Approach |
| --- | --- |
| Streams content chunks | Mock `client.chat.completions.create`; yield delta chunks; assert `content` events |
| Handles tool call delta and dispatches tool | Mock tool call delta; assert `dispatchTool` called; result injected |
| Appends assistant reply to history | Assert history has assistant message after run |
| Loads previous history correctly | Seed history; assert messages included in API call |
| Cost calculated from final usage | Mock final chunk with usage; assert `usdCost` correct |
| Azure base URL passed through | Assert `baseURL` set to Azure endpoint in SDK constructor |
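Mocking the stream is straightforward because `for await` only needs an async iterable: a plain async generator can stand in for the SDK's stream. A minimal sketch (the chunk shape is trimmed to the fields the adapter reads; names are illustrative):

```typescript
// Fake the SDK stream with an async generator and collect content deltas
// the way the adapter's run() loop does.
interface FakeChunk {
  choices: { delta: { content?: string } }[];
}

async function* fakeStream(parts: string[]): AsyncGenerator<FakeChunk> {
  for (const part of parts) {
    yield { choices: [{ delta: { content: part } }] };
  }
}

async function collectContent(stream: AsyncIterable<FakeChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta;
    if (delta?.content) text += delta.content;
  }
  return text;
}
```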

Integration tests

| Test | Approach |
| --- | --- |
| Full stream against OpenAI API | Requires `OPENAI_API_KEY` in CI; assert content returned |
| Tool call round-trip | Prompt that triggers `web_search`; assert tool result in history |

© 2026 Leadmetrics — Internal use only