# Configuration
## Config File

Location: `~/.config/ghost/config.toml`

Override with the `GHOST_CONFIG_DIR` environment variable.
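For example, to run against a scratch config without touching your main one (the `ghost-dev` directory name is just an illustration):

```shell
# Use an alternate config directory; "ghost-dev" is an arbitrary example name
export GHOST_CONFIG_DIR="$HOME/.config/ghost-dev"
mkdir -p "$GHOST_CONFIG_DIR"
# Subsequent ghost commands now read $GHOST_CONFIG_DIR/config.toml
```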
## Full Example

```toml
# Workspace path (default: ~/GHOST)
workspace = "~/GHOST"

# Model aliases — define one or more (keys are flattened under [models])
[models]
default = "primary"  # Which alias to use by default
# vision = "fast"    # Optional: model for PDF vision fallback (falls back to default)

[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
provider_routing = { only = ["anthropic", "openai", "google"] }

[models.fast]
provider = "kimi_code"
model = "kimi-k2.5"
context_window = 250000

[models.codex]
provider = "openai_oauth"
model = "gpt-5.3-codex"
context_window = 250000

# Discord
[discord]
enabled = true
allowed_user_id = "123456789012345678"  # Your Discord user ID

# Embeddings (llama-server)
[embeddings]
url = "http://127.0.0.1:11434"  # llama-server endpoint
model = "qwen3-embedding:8b"    # Embedding model
batch_size = 32                 # Vectors per batch
dimension = 1024                # Vector dimension

# Timing
[timing]
heartbeat_idle_minutes = 4       # Minutes idle before heartbeat
heartbeat_check_seconds = 60     # How often to check for idle
heartbeat_continue_minutes = 30  # Max heartbeat conversation length
reflection_idle_minutes = 4      # Minutes idle before reflection runs
scheduler_tick_seconds = 10      # Cron scheduler poll interval

# Compaction (context window management)
[compaction]
threshold = 0.85          # Compact when context is 85% full
mask_preview_chars = 100  # Preview chars for compacted messages

# Coding agent
[coding]
# model = "fast"  # Optional: use a different model alias for coding

# Document conversion (Docling)
[docling]
# url = "http://127.0.0.1:5001"  # Remote docling-serve URL (omit for local uv script)
# timeout = 600                  # Conversion timeout in seconds

# Web
[web]
search_max_results = 5  # Default Brave search results
# crawl4ai_url = "http://localhost:11235"  # Optional Crawl4AI backend

# Debug
[debug]
save_requests = false  # Save raw provider requests to disk
```
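Most sections are optional. As a sketch, a minimal config could carry only the workspace and a single model alias, assuming every omitted section falls back to the defaults shown in the full example:

```toml
workspace = "~/GHOST"

[models]
default = "primary"

[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
```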
## CLI Access

```shell
# Read a config value
ghost config get discord.allowed_user_id

# Set a config value
ghost config set timing.heartbeat_idle_minutes 10
```
## Secrets

Secrets are read from `~/.config/ghost/.env` or directly from environment variables:

| Variable | Purpose |
|---|---|
| `OPENROUTER_API_KEY` | OpenRouter provider |
| `BRAVE_API_KEY` | Web search |
| `DISCORD_TOKEN` | Discord bot |
| `KIMI_API_KEY` | Kimi Code provider |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector URL (enables export) |
| `OTEL_SERVICE_NAME` | Service name in traces (default: `GHOST`) |
| `OTEL_EXPORTER_OTLP_HEADERS` | Auth headers for remote backends |
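For reference, a starter `.env` might look like the following; every value below is a placeholder, not a real key format:

```shell
# ~/.config/ghost/.env (placeholder values; replace with your real secrets)
OPENROUTER_API_KEY="REPLACE_ME"
BRAVE_API_KEY="REPLACE_ME"
DISCORD_TOKEN="REPLACE_ME"
```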
## Observability

GHOST uses standard OpenTelemetry for tracing. By default, traces go to the
console only. Set `OTEL_EXPORTER_OTLP_ENDPOINT` to export to any
OTLP-compatible backend.
Self-hosted with SigNoz (included in the repo):
```shell
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```

Cloud backend (Logfire, Datadog, Grafana Cloud, etc.):

```shell
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-backend.example.com
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_TOKEN"
```

All `gen_ai.*` semantic convention fields are recorded on LLM calls (model, tokens,
cache hits), so backends with LLM observability features (SigNoz, Logfire) can display
them natively.