Providers

OpenCrust supports 15 LLM providers. Three are native implementations with provider-specific APIs. The remaining twelve use the OpenAI-compatible chat completions format and are built on top of the OpenAiProvider with custom base URLs.

All providers support streaming responses and tool use.

API Key Resolution

For every provider, API keys are resolved in this order:

  1. Credential vault - ~/.opencrust/credentials/vault.json (requires OPENCRUST_VAULT_PASSPHRASE)
  2. Config file - api_key field under the llm: section in config.yml
  3. Environment variable - provider-specific env var (listed below)
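As a sketch of the lowest-precedence option, the provider-specific environment variable can be exported in the shell before starting OpenCrust (the key values below are placeholders, not real keys):

```shell
# Environment variables are the fallback when neither the vault nor the
# config file supplies a key. Placeholder values shown.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENCRUST_VAULT_PASSPHRASE="change-me"   # needed only to unlock the vault
```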

Custom Base URL

All providers support custom base URLs via the base_url configuration field. This is useful for:

  • Proxies and gateways - Route requests through a custom endpoint
  • Self-hosted models - Point to your own API server
  • Regional endpoints - Use region-specific API URLs
  • Development/testing - Connect to local or staging environments

Configuration

Add the base_url field to any provider configuration:

llm:
  custom-openai:
    provider: openai
    model: gpt-4o
    base_url: "https://my-proxy.example.com/v1"
    api_key: sk-...
  
  remote-ollama:
    provider: ollama
    model: llama3.1
    base_url: "http://192.168.1.100:11434"
  
  custom-anthropic:
    provider: anthropic
    model: claude-sonnet-4-5-20250929
    base_url: "https://my-anthropic-proxy.com"
    api_key: sk-ant-...

URL Format

  • URLs should include the protocol (http:// or https://)
  • Trailing slashes are automatically handled
  • For OpenAI-compatible providers, the /v1/chat/completions path is appended automatically
  • For Anthropic, the /v1/messages path is appended automatically
  • For Ollama, the /api/chat path is appended automatically
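The joining behavior described above can be sketched in a few lines of shell (join_url is an illustrative helper, not part of OpenCrust):

```shell
# Strip any trailing slash from the base URL, then append the
# provider-specific path.
join_url() {
  printf '%s%s\n' "${1%/}" "$2"
}

join_url "https://my-anthropic-proxy.com/" "/v1/messages"
# -> https://my-anthropic-proxy.com/v1/messages
join_url "http://192.168.1.100:11434" "/api/chat"
# -> http://192.168.1.100:11434/api/chat
```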

Validation

The setup wizard validates base URLs to ensure they:

  • Use valid HTTP/HTTPS protocols
  • Have proper URL format
  • Are not empty or malformed
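A minimal sketch of that check (validate_base_url is illustrative, not the wizard's actual code):

```shell
# Accept only non-empty http:// or https:// URLs; reject everything else.
validate_base_url() {
  case "$1" in
    http://?*|https://?*) return 0 ;;
    *) return 1 ;;
  esac
}

validate_base_url "https://api.example.com" && echo accepted
# -> accepted
validate_base_url "ftp://example.com" || echo rejected
# -> rejected
```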

Native Providers

Anthropic Claude

Claude models with native streaming (SSE) and tool use via the Anthropic Messages API.

  Config type:    anthropic
  Default model:  claude-sonnet-4-5-20250929
  Base URL:       https://api.anthropic.com
  Env var:        ANTHROPIC_API_KEY

llm:
  claude:
    provider: anthropic
    model: claude-sonnet-4-5-20250929
    # api_key: sk-... (or use vault / ANTHROPIC_API_KEY env var)

OpenAI

GPT models via the OpenAI Chat Completions API. Also works with Azure OpenAI or any OpenAI-compatible endpoint by overriding base_url.

  Config type:    openai
  Default model:  gpt-4o
  Base URL:       https://api.openai.com
  Env var:        OPENAI_API_KEY

llm:
  gpt:
    provider: openai
    model: gpt-4o
    # base_url: https://your-azure-endpoint.openai.azure.com  # optional override

Ollama

Run local models with streaming. No API key required.

  Config type:    ollama
  Default model:  llama3.1
  Base URL:       http://localhost:11434
  Env var:        None

llm:
  local:
    provider: ollama
    model: llama3.1
    base_url: "http://localhost:11434"

OpenAI-Compatible Providers

These providers all use the OpenAI chat completions wire format. OpenCrust sends requests to their respective API endpoints using the standard Authorization: Bearer header.
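On the wire, every request follows the same chat-completions shape regardless of provider; a minimal (hypothetical) request body looks like:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "stream": true
}
```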

Sansa

Regional LLM from sansaml.com.

  Config type:    sansa
  Default model:  sansa-auto
  Base URL:       https://api.sansaml.com
  Env var:        SANSA_API_KEY

llm:
  sansa:
    provider: sansa
    model: sansa-auto

DeepSeek

  Config type:    deepseek
  Default model:  deepseek-chat
  Base URL:       https://api.deepseek.com
  Env var:        DEEPSEEK_API_KEY

llm:
  deepseek:
    provider: deepseek
    model: deepseek-chat

Mistral

  Config type:    mistral
  Default model:  mistral-large-latest
  Base URL:       https://api.mistral.ai
  Env var:        MISTRAL_API_KEY

llm:
  mistral:
    provider: mistral
    model: mistral-large-latest

Gemini

Google Gemini via the OpenAI-compatible endpoint.

  Config type:    gemini
  Default model:  gemini-2.5-flash
  Base URL:       https://generativelanguage.googleapis.com/v1beta/openai/
  Env var:        GEMINI_API_KEY

llm:
  gemini:
    provider: gemini
    model: gemini-2.5-flash

Falcon

TII Falcon 180B via AI71.

  Config type:    falcon
  Default model:  tiiuae/falcon-180b-chat
  Base URL:       https://api.ai71.ai/v1
  Env var:        FALCON_API_KEY

llm:
  falcon:
    provider: falcon
    model: tiiuae/falcon-180b-chat

Jais

Core42 Jais 70B.

  Config type:    jais
  Default model:  jais-adapted-70b-chat
  Base URL:       https://api.core42.ai/v1
  Env var:        JAIS_API_KEY

llm:
  jais:
    provider: jais
    model: jais-adapted-70b-chat

Qwen

Alibaba Qwen via DashScope international.

  Config type:    qwen
  Default model:  qwen-plus
  Base URL:       https://dashscope-intl.aliyuncs.com/compatible-mode/v1
  Env var:        QWEN_API_KEY

llm:
  qwen:
    provider: qwen
    model: qwen-plus

Yi

01.AI Yi Large.

  Config type:    yi
  Default model:  yi-large
  Base URL:       https://api.lingyiwanwu.com/v1
  Env var:        YI_API_KEY

llm:
  yi:
    provider: yi
    model: yi-large

Cohere

Cohere Command R Plus via the compatibility endpoint.

  Config type:    cohere
  Default model:  command-r-plus
  Base URL:       https://api.cohere.com/compatibility/v1
  Env var:        COHERE_API_KEY

llm:
  cohere:
    provider: cohere
    model: command-r-plus

MiniMax

  Config type:    minimax
  Default model:  MiniMax-Text-01
  Base URL:       https://api.minimaxi.chat/v1
  Env var:        MINIMAX_API_KEY

llm:
  minimax:
    provider: minimax
    model: MiniMax-Text-01

Moonshot

Kimi models from Moonshot AI.

  Config type:    moonshot
  Default model:  kimi-k2-0711-preview
  Base URL:       https://api.moonshot.cn/v1
  Env var:        MOONSHOT_API_KEY

llm:
  moonshot:
    provider: moonshot
    model: kimi-k2-0711-preview

vLLM

Self-hosted models via vLLM's OpenAI-compatible server. No API key is required unless the server is started with --api-key.

  Config type:    vllm
  Default model:  (none; must be specified)
  Base URL:       http://localhost:8000
  Env var:        VLLM_API_KEY (optional)

llm:
  my-vllm:
    provider: vllm
    model: Qwen/Qwen2.5-7B-Instruct   # model name as served by vLLM
    base_url: "http://localhost:8000"  # override if vLLM runs elsewhere
    # api_key: secret                 # only if vLLM was started with --api-key

Start vLLM with:

vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

Runtime Provider Switching

You can add or switch providers at runtime without restarting the daemon.

REST API:

# List active providers
curl http://127.0.0.1:3888/api/providers

# Add a new provider
curl -X POST http://127.0.0.1:3888/api/providers \
  -H "Content-Type: application/json" \
  -d '{"provider": "deepseek", "api_key": "sk-..."}'

WebSocket: Include an optional provider field in your message to route it to a specific provider:

{"type": "message", "content": "Hello", "provider": "deepseek"}

Webchat UI: The sidebar has a provider dropdown and API key input. Click "Save & Activate" to register a new provider at runtime. API keys are persisted to the vault when OPENCRUST_VAULT_PASSPHRASE is set.

Multiple Instances

You can configure multiple instances of the same provider type with different models or settings:

llm:
  claude-sonnet:
    provider: anthropic
    model: claude-sonnet-4-5-20250929

  claude-haiku:
    provider: anthropic
    model: claude-haiku-4-5-20251001

  gpt4o:
    provider: openai
    model: gpt-4o

  gpt4o-mini:
    provider: openai
    model: gpt-4o-mini

The first configured provider is used by default. Use the provider field in WebSocket messages or the webchat dropdown to select a specific one.
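For example, a WebSocket message routed to the claude-haiku instance defined above:

```json
{"type": "message", "content": "Hello", "provider": "claude-haiku"}
```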