
# Providers

A provider is an LLM backend. GHOST supports multiple providers and lets you define named model aliases.

| Provider | ID | Auth |
| --- | --- | --- |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` env var |
| Kimi Code | `kimi_code` | `KIMI_API_KEY` env var |
| OpenAI OAuth (Codex) | `openai_oauth` | `ghost auth codex` |
| Anthropic (OAuth) | `anthropic` | Claude Code credentials (see setup below) |

Define aliases in `config.toml` to give your models memorable names:

```toml
# ~/.config/ghost/config.toml
[models]
default = "primary"

[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000

[models.fast]
provider = "kimi_code"
model = "kimi-k2.5"
context_window = 250000
```

:::note
`default` specifies which alias to use when none is specified. Each alias needs `provider`, `model`, and `context_window`. You can optionally add `headers` for extra HTTP headers.
:::
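As an illustration of the optional `headers` field, here is a sketch of an alias that attaches an extra HTTP header to every request (the header name and value are hypothetical, not something GHOST requires):

```toml
[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
# Extra HTTP headers sent with each request to this provider.
# "X-Title" is only an example; use whatever headers your setup needs.
headers = { "X-Title" = "ghost" }
```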

## OpenRouter provider routing

OpenRouter routes requests across multiple upstream providers. Use `provider_routing` to control which providers receive your requests, for example to restrict to providers that support prompt caching:

```toml
# ~/.config/ghost/config.toml
[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
provider_routing = { only = ["anthropic", "openai", "google", "deepseek"] }
```

Available fields:

| Field | Type | Description |
| --- | --- | --- |
| `only` | string[] | Whitelist: route only to these providers |
| `ignore` | string[] | Blacklist: never route to these providers |
| `order` | string[] | Preferred provider order (first = highest priority) |
| `allow_fallbacks` | bool | Fall back when preferred providers fail |
| `require_parameters` | bool | Only use providers supporting all request parameters |

This maps directly to OpenRouter's `provider` preferences request field. Other providers ignore it.
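A sketch combining several of these fields, preferring specific providers while still allowing fallbacks (the provider names here are illustrative):

```toml
[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
# Try Anthropic first, then Google; allow other providers if both fail,
# and skip providers that don't support all request parameters.
provider_routing = { order = ["anthropic", "google"], allow_fallbacks = true, require_parameters = true }
```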

## Fallback chains

Model references can be a single alias or an ordered list. When configured as a list, GHOST tries each model in order: if one fails with a retryable error (rate limit, server error, timeout), it automatically falls through to the next.

```toml
# ~/.config/ghost/config.toml
[models]
# Single alias (standard)
default = "primary"
# Or a chain with automatic fallback (replace the line above):
# default = ["primary", "fallback", "tertiary"]

[models.primary]
provider = "anthropic"
model = "claude-sonnet-4-6"
context_window = 1000000

[models.fallback]
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
context_window = 200000

[models.tertiary]
provider = "openrouter"
model = "google/gemini-2.0-flash"
context_window = 128000
```

Permanent errors (authentication failures, model not found) stop the chain immediately; there is no point trying a fallback for a credentials problem.

Each provider in the chain has its own circuit breaker (3 consecutive failures → skip for 60 seconds), so known-bad models are skipped quickly.

Agents can also use chains:

```lua
return {
  name = "my-agent",
  model = { "primary", "fallback" },
  -- ...
}
```

## Anthropic (Claude Code OAuth)

The Anthropic provider talks directly to the Anthropic Messages API using Claude Code's OAuth credentials. This gives GHOST access to Claude Opus, Sonnet, and other Claude models through your existing Claude Code subscription, with no separate API key needed.

Be aware that this is very much against Anthropic's ToS, and they could decide to enforce their rules and ban your account.

Run Claude Code once and authenticate:

```sh
# Civilized one-time run
nix run nixpkgs#claude-code --impure

# Barbaric global install
npm install -g @anthropic-ai/claude-code
claude
```

This creates `~/.claude/.credentials.json` with your OAuth tokens. Then point an alias at the `anthropic` provider:

```toml
# ~/.config/ghost/config.toml
[models.claude]
provider = "anthropic"
model = "claude-sonnet-4-6"
context_window = 1000000
```

Available models include `claude-sonnet-4-6`, `claude-opus-4-6`, and `claude-haiku-4-5-20251001`. See Anthropic's model docs for the full list.

## OpenAI OAuth (Codex)

```sh
# Authenticate with OpenAI (browser-based OAuth flow)
ghost auth codex

# Check status
ghost auth status

# Revoke tokens
ghost auth revoke
```
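Once authenticated, an alias can point at the `openai_oauth` provider. A minimal sketch, assuming a Codex-capable model is available; the model name and context window below are placeholders, so substitute whatever your subscription actually exposes:

```toml
[models.codex]
provider = "openai_oauth"
model = "gpt-5-codex"    # hypothetical model name; check your available models
context_window = 200000  # assumption; adjust to the model's real limit
```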