
Configuration

Location: ~/.config/ghost/config.toml

Override with GHOST_CONFIG_DIR environment variable.
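The resolution order can be sketched in a few lines — a minimal sketch assuming the override simply takes precedence when set (the helper name `config_dir` is illustrative, not GHOST's actual code):

```python
import os
from pathlib import Path

def config_dir() -> Path:
    """Resolve the config directory: GHOST_CONFIG_DIR wins when set,
    otherwise fall back to ~/.config/ghost."""
    override = os.environ.get("GHOST_CONFIG_DIR")
    if override:
        return Path(override).expanduser()
    return Path("~/.config/ghost").expanduser()

os.environ["GHOST_CONFIG_DIR"] = "/tmp/ghost-test"
print(config_dir())  # → /tmp/ghost-test
```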

~/.config/ghost/config.toml

```toml
# Workspace path (default: ~/GHOST)
workspace = "~/GHOST"

# Model aliases — define one or more (keys are flattened under [models])
[models]
default = "primary"   # Which alias to use by default
# vision = "fast"     # Optional: model for PDF vision fallback (falls back to default)

[models.primary]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
context_window = 200000
provider_routing = { only = ["anthropic", "openai", "google"] }

[models.fast]
provider = "kimi_code"
model = "kimi-k2.5"
context_window = 250000

[models.codex]
provider = "openai_oauth"
model = "gpt-5.3-codex"
context_window = 250000

# Discord
[discord]
enabled = true
allowed_user_id = "123456789012345678"   # Your Discord user ID

# Embeddings (llama-server)
[embeddings]
url = "http://127.0.0.1:11434"   # llama-server endpoint
model = "qwen3-embedding:8b"     # Embedding model
batch_size = 32                  # Vectors per batch
dimension = 1024                 # Vector dimension

# Timing
[timing]
heartbeat_idle_minutes = 4       # Minutes idle before a heartbeat fires
heartbeat_check_seconds = 60     # How often to check for idleness
heartbeat_continue_minutes = 30  # Max heartbeat conversation length
reflection_idle_minutes = 4      # Minutes idle before reflection runs
scheduler_tick_seconds = 10      # Cron scheduler poll interval

# Compaction (context window management)
[compaction]
threshold = 0.85           # Compact when the context is 85% full
mask_preview_chars = 100   # Preview chars for compacted messages

# Coding agent
[coding]
# model = "fast"   # Optional: use a different model alias for coding

# Document conversion (Docling)
[docling]
# url = "http://127.0.0.1:5001"   # Remote docling-serve URL (omit for local uv script)
# timeout = 600                   # Conversion timeout in seconds

# Web
[web]
search_max_results = 5   # Default Brave search results
# crawl4ai_url = "http://localhost:11235"   # Optional Crawl4AI backend

# Debug
[debug]
save_requests = false   # Save raw provider requests to disk
```
```sh
# Read a config value
ghost config get discord.allowed_user_id

# Set a config value
ghost config set timing.heartbeat_idle_minutes 10
```

Secrets are read from ~/.config/ghost/.env or directly from environment variables:

| Variable | Purpose |
| --- | --- |
| `OPENROUTER_API_KEY` | OpenRouter provider |
| `BRAVE_API_KEY` | Web search |
| `DISCORD_TOKEN` | Discord bot |
| `KIMI_API_KEY` | Kimi Code provider |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector URL (enables export) |
| `OTEL_SERVICE_NAME` | Service name in traces (default: GHOST) |
| `OTEL_EXPORTER_OTLP_HEADERS` | Auth headers for remote backends |
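A minimal sketch of how such secrets are typically resolved — `KEY=VALUE` lines from a `.env` file, with the process environment filling in anything the file omits. The parser and the precedence order here are assumptions for illustration, not GHOST's code:

```python
import os

def load_secrets(env_text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines (skipping blanks and # comments),
    then fill any missing keys from the process environment."""
    secrets = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        secrets[key.strip()] = value.strip().strip('"')
    for key in ("OPENROUTER_API_KEY", "BRAVE_API_KEY", "DISCORD_TOKEN"):
        secrets.setdefault(key, os.environ.get(key, ""))
    return secrets

print(load_secrets('DISCORD_TOKEN="abc123"')["DISCORD_TOKEN"])  # → abc123
```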

GHOST uses standard OpenTelemetry for tracing. By default, traces go to the console only. Set OTEL_EXPORTER_OTLP_ENDPOINT to export to any OTLP-compatible backend.

Self-hosted with SigNoz (included in the repo):

.env

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```

Cloud backend (Logfire, Datadog, Grafana Cloud, etc.):

.env

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=https://your-backend.example.com
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_TOKEN"
```
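The OpenTelemetry SDK reads `OTEL_EXPORTER_OTLP_HEADERS` as comma-separated `key=value` pairs (the spec additionally allows URL-encoded values). A stdlib sketch of that convention — not the SDK's own parser:

```python
def parse_otlp_headers(raw: str) -> dict[str, str]:
    """Split 'k1=v1,k2=v2' into a header dict, following the
    OTLP environment-variable convention."""
    headers = {}
    for pair in raw.split(","):
        if "=" not in pair:
            continue
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("Authorization=Bearer TOKEN,x-org=acme"))
```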

All gen_ai.* semantic convention fields are recorded on LLM calls (model, tokens, cache hits), so backends with LLM observability features (SigNoz, Logfire) can display them natively.