Overview

Tale connects to AI models through providers — OpenAI-compatible API endpoints. Each provider has a base URL, an API key, and one or more model definitions. Out of the box, Tale ships with an OpenRouter example provider that gives access to models from OpenAI, Anthropic, Google, Mistral, Meta, and others through a single API key.

Managing providers

Providers are managed in Settings > Providers in the management UI. Admins can:
  • Add a provider with a name, display name, base URL, API key, and one or more models
  • Edit a provider to update its configuration or add/remove models
  • Delete a provider to remove it entirely
Each model definition includes an ID (must match the model name expected by the API), a display name, and one or more tags (chat, vision, embedding) that control where the model appears in the platform.
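
As a sketch, a single model definition inside a provider config looks like this (field names follow the Ollama example later on this page; the ID shown is one of the OpenRouter example models):

```json
{
  "id": "anthropic/claude-opus-4.6",
  "displayName": "Claude Opus 4.6",
  "tags": ["chat", "vision"]
}
```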

Provider files

Provider configuration is stored as JSON files in the providers/ directory inside TALE_CONFIG_DIR:
  • providers/<name>.json — public config (base URL, models, tags)
  • providers/<name>.secrets.json — SOPS-encrypted API key
You can also edit these files directly instead of using the UI. See environment reference for the TALE_CONFIG_DIR location.
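
Before encryption, the secrets file is a small JSON document holding the key. The field name below is an assumption for illustration — check the shipped examples/providers/openrouter.secrets.json for the authoritative schema:

```json
{
  "apiKey": "sk-or-v1-..."
}
```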

Using the example provider

The repository includes a ready-to-use OpenRouter provider config in examples/providers/. To use it:
  1. Copy the example files to your config directory:
cp examples/providers/openrouter.json $TALE_CONFIG_DIR/providers/
cp examples/providers/openrouter.secrets.json $TALE_CONFIG_DIR/providers/
  2. Set your OpenRouter API key. You can get one at openrouter.ai/keys.
  3. Encrypt the secrets file with SOPS, or update the API key via the UI in Settings > Providers > OpenRouter.
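With the SOPS CLI, step 3 might look like the following — this assumes you already have SOPS key configuration (e.g. a .sops.yaml with your age or PGP keys) in place and have put your plaintext key into the file:

```shell
# Encrypt the secrets file in place using your configured SOPS keys
sops --encrypt --in-place "$TALE_CONFIG_DIR/providers/openrouter.secrets.json"
```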
The example provider includes 18 models across multiple providers:
| Provider | Models | Tags |
| --- | --- | --- |
| Anthropic | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | chat, vision |
| OpenAI | GPT-5.2, GPT-5.2 Instant, GPT-5.2 Pro | chat, vision |
| Google | Gemini 3 Pro, Gemini 3 Flash | chat, vision |
| Mistral | Mistral Large 3, Mistral Medium 3 | chat |
| Meta | LLaMA 4 Maverick, LLaMA 4 Scout | chat |
| DeepSeek | DeepSeek V3.2 | chat |
| Moonshot | Kimi K2.5 | chat |
| Qwen | Qwen3 Next 80B, Qwen3.5 35B, Qwen3 VL 32B | chat, vision |

Connecting self-hosted models (BYOM)

Any inference server that exposes an OpenAI-compatible API can be used as a provider — the Ollama setup below is one example. To connect a self-hosted model:
  1. Go to Settings > Providers and click Add provider
  2. Enter a name (e.g., ollama), display name, and the base URL of your server
  3. Enter an API key (use any non-empty string if your server doesn’t require auth)
  4. Add one or more models — the model ID must match the name served by your endpoint (e.g., llama3 for Ollama)
  5. Select the appropriate tags (typically chat for language models)
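
Before saving, it can help to sanity-check the config shape. The helper below is a hypothetical sketch (not part of Tale); it checks the fields this page describes — an http(s) base URL, non-empty model IDs, and at least one recognized tag per model:

```python
import json

RECOGNIZED_TAGS = {"chat", "vision", "embedding"}

def validate_provider(cfg: dict) -> list[str]:
    """Return a list of problems found in a provider config dict."""
    problems = []
    if not cfg.get("baseUrl", "").startswith(("http://", "https://")):
        problems.append("baseUrl must be an http(s) URL")
    for model in cfg.get("models", []):
        if not model.get("id"):
            problems.append("every model needs a non-empty id")
        if not RECOGNIZED_TAGS & set(model.get("tags", [])):
            problems.append(f"model {model.get('id')!r} has no recognized tag")
    return problems

cfg = json.loads("""
{
  "displayName": "Ollama (local)",
  "baseUrl": "http://localhost:11434/v1",
  "models": [{"id": "llama3.3", "displayName": "LLaMA 3.3", "tags": ["chat"]}]
}
""")
print(validate_provider(cfg))  # → []
```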

Example: Ollama

{
  "displayName": "Ollama (local)",
  "baseUrl": "http://localhost:11434/v1",
  "models": [
    {
      "id": "llama3.3",
      "displayName": "LLaMA 3.3",
      "tags": ["chat"]
    },
    {
      "id": "mistral",
      "displayName": "Mistral 7B",
      "tags": ["chat"]
    }
  ]
}

Making models available in chat

After adding a provider with models, you also need to add the model IDs to the agent’s supportedModels list. Agent configurations are stored in TALE_CONFIG_DIR/agents/. Edit the relevant agent JSON file and add the exact model IDs as defined in your provider config (models[*].id):
{
  "supportedModels": [
    "llama3.3",
    "anthropic/claude-opus-4.6"
  ]
}
The IDs must match the id field in the provider’s model definition exactly. For example, if your Ollama provider defines a model with "id": "llama3.3", use "llama3.3" — not "ollama/llama3.3". Only models listed in supportedModels with the chat tag appear in the model selector dropdown.
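The matching rule can be sketched as a quick check — the sets below are illustrative stand-ins for the IDs in your actual provider and agent JSON files:

```python
# Model IDs defined across provider configs (models[*].id)
provider_models = {"llama3.3", "mistral", "anthropic/claude-opus-4.6"}

# An agent's supportedModels list — note the wrongly prefixed entry
agent = {"supportedModels": ["llama3.3", "ollama/llama3.3"]}

# Entries no provider defines will never appear in the model selector
missing = [m for m in agent["supportedModels"] if m not in provider_models]
print(missing)  # → ['ollama/llama3.3']
```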
Last modified on April 10, 2026