LLM Providers
Moved. This content has been reorganised into the docs/adapters/ folder.
See Adapters — Index for the full adapter documentation:
- Adapters Overview — interface, factory, model allocation, fallback, adding a provider
- Claude Code CLI — subprocess I/O, NDJSON stream, session resumption
- OpenAI / Codex — REST API, SSE streaming, session history
- Ollama — local REST, NDJSON streaming, zero cost
- Webhook — async phone-home, enterprise custom runtimes
- Codex Local [To Build] — Codex CLI subprocess, GPT-5 family, ChatGPT account auth
- Gemini Local [To Build] — Gemini CLI subprocess, Gemini 3/2.5 models, Google account auth