AI Providers
Connect your AI provider API keys to power the Studio and skill execution. Skilder supports Anthropic, OpenAI, Google AI, Infomaniak, and Ollama.
Skilder uses a BYOK (Bring Your Own Key) model. You connect your existing API keys — Skilder sends requests directly to the provider, no proxy or markup.
The first time you configure a provider, Skilder shows a BYOK Policy Acknowledgment confirming how keys are stored and used. This is a one-time step per workspace.
How It Works
Every provider follows the same two-step flow:
- Enter credentials — Paste your API key (and any provider-specific fields). Skilder validates them and fetches available models.
- Select a model — Pick from the list. Recommended models are marked with a star. Click Validate & Save.
Open Workspace Settings > AI Providers and click Configure on any provider card to start.
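Before pasting a key, you can sanity-check its format locally. A minimal sketch, using the key prefixes documented in the provider sections below (this is a convenience check only, not how Skilder itself validates keys; Google, Infomaniak, and Ollama have no fixed key prefix and are omitted):

```python
# Quick local format check before pasting a key into Skilder.
# Prefixes match the provider sections below; assumption: this sketch
# is illustrative, not Skilder's actual validation logic.
KEY_PREFIXES = {
    "anthropic": "sk-ant-",  # Anthropic keys
    "openai": "sk-",         # OpenAI keys
}

def looks_like_valid_key(provider: str, key: str) -> bool:
    """Return True if the key has the expected prefix for the provider."""
    prefix = KEY_PREFIXES.get(provider)
    return prefix is not None and key.startswith(prefix)

print(looks_like_valid_key("anthropic", "sk-ant-example"))  # True
print(looks_like_valid_key("anthropic", "sk-example"))      # False
```

Note that a correct prefix only catches copy-paste mistakes; Skilder's Validate & Save step is what actually confirms the key works.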
Anthropic (Claude)
Get your API key
- Go to console.anthropic.com > Settings > API Keys.
- Click Create Key, name it (e.g., `skilder-workspace`), and copy it immediately.
- Keys start with `sk-ant-`. Billing must be enabled.
Add to Skilder
Paste the API key > Next: Select Model > choose a model > Validate & Save.
Recommended models
| Model | Best For |
|---|---|
| Claude 3.5 Haiku | Fast, cost-effective — high-volume or simpler tasks |
| Claude 3.5 Sonnet | Balanced performance and cost — good default |
| Claude Sonnet 4 | Latest model, strong reasoning |
Other models like Claude 3 Opus remain available in the full list. Deprecated models are hidden automatically.
Troubleshooting
- "Invalid API key" — Verify the full key was copied. Keys start with `sk-ant-`.
- "Insufficient permissions" — Enable billing on your Anthropic account.
- No models appear — Check your plan and billing status.
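If validation keeps failing, you can test the key outside Skilder against Anthropic's public list-models endpoint. A hedged sketch using only the Python standard library (the `anthropic-version` value is an assumption; check Anthropic's API reference for the current one):

```python
import json
import urllib.request

def anthropic_models_request(api_key: str) -> urllib.request.Request:
    # Anthropic's list-models endpoint; needs the key and a version header.
    return urllib.request.Request(
        "https://api.anthropic.com/v1/models",
        headers={"x-api-key": api_key, "anthropic-version": "2023-06-01"},
    )

def list_anthropic_models(api_key: str) -> list[str]:
    # Network call: run only with a real key. A 401 here means the key
    # is invalid; an empty list suggests a plan or billing issue.
    with urllib.request.urlopen(anthropic_models_request(api_key)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

A successful call returning model IDs means the key itself is fine, and any remaining problem is on the Skilder side.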
OpenAI
Get your API key
- Go to platform.openai.com > Dashboard > API Keys.
- Click Create new secret key. Set permissions to All or at least Model + Chat.
- Copy immediately. Keys start with `sk-`. Billing must be enabled.
Add to Skilder
Paste the API key > Next: Select Model > choose a model > Validate & Save.
Recommended models
| Model | Best For |
|---|---|
| GPT-4o | Fast, multimodal — good default for most tasks |
| GPT-4o mini | Smallest and cheapest, ideal for high-volume tasks |
| o3 | Advanced reasoning for complex analysis |
Other models like o1 remain available. Deprecated models (GPT-3.5 Turbo, dated snapshots) are hidden.
Troubleshooting
- "Invalid API key" — Verify the full key was copied. Keys start with `sk-`.
- "Insufficient quota" — Enable billing with available credits.
- Model not listed — Check availability under Settings > Limits in your OpenAI account.
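As with Anthropic, you can check an OpenAI key independently of Skilder by listing models. A sketch with the standard library, assuming OpenAI's usual Bearer-token authentication:

```python
import json
import urllib.request

def openai_models_request(api_key: str) -> urllib.request.Request:
    # OpenAI's list-models endpoint; authenticates with a Bearer token.
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_openai_models(api_key: str) -> list[str]:
    # Network call: run only with a real key. A 401 means a bad key;
    # a 429 quota error points at billing or available credits.
    with urllib.request.urlopen(openai_models_request(api_key)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```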
Google AI
Get your API key
- Go to aistudio.google.com > Get API Key > Create API Key.
- Select a Google Cloud project (or let AI Studio create one).
- Copy the generated key. Free tier available with rate limits.
Add to Skilder
Paste the API key > Next: Select Model > choose a model > Validate & Save.
Recommended models
| Model | Best For |
|---|---|
| Gemini 2.0 Flash | Fast, cost-effective for everyday tasks |
| Gemini 2.5 Pro | Most capable, complex reasoning and long context |
| Gemini 2.5 Flash | Balanced performance across a wide range of tasks |
Deprecated models (Gemini 1.0, legacy gemini-pro, 1.5 previews) are hidden. Other current models remain available.
Troubleshooting
- "API key not valid" — Ensure the Generative Language API is enabled on the associated Google Cloud project.
- "Quota exceeded" — Enable billing for higher throughput.
- Model not listed — Some models require specific API versions or regional availability.
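A Google AI key can also be checked outside Skilder. A sketch against the Gemini API's list-models endpoint, where the key is passed as a query parameter rather than a header (endpoint path per the public Gemini API; verify against Google's current docs):

```python
import json
import urllib.request

def google_models_url(api_key: str) -> str:
    # The Gemini API lists models here; the key goes in the query string.
    return f"https://generativelanguage.googleapis.com/v1beta/models?key={api_key}"

def list_google_models(api_key: str) -> list[str]:
    # Network call: run only with a real key. A 400 "API key not valid"
    # usually means the API is not enabled on the Cloud project.
    with urllib.request.urlopen(google_models_url(api_key)) as resp:
        return [m["name"] for m in json.load(resp)["models"]]
```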
Infomaniak
Swiss-hosted AI models with data sovereignty guarantees. Your data stays in Switzerland and is not used for model training.
Get your credentials
Infomaniak requires both an API key and a Product ID:
- API key: manager.infomaniak.com > Developer Tools > API Tokens > Create a token.
- Product ID: Navigate to your AI Tools product — the numeric identifier shown in the overview (e.g., `123456`).
Add to Skilder
Paste your API key + Product ID. Select the API Version:
- V1 (Stable) — Llama, Qwen, Mistral
- V2 (Beta) — Apertus, GPT-OSS
Then Next: Select Model > choose a model > Validate & Save.
Recommended models
| Model | API Version | Best For |
|---|---|---|
| Mistral | V1 | Fast instruction-following with Swiss data sovereignty |
| GPT-OSS/120B | V2 | Large open-source model for complex tasks |
| Apertus | V2 | Experimental architecture with newer capabilities |
Troubleshooting
- "Invalid credentials" — Verify both the API key and numeric Product ID.
- No models appear — Check that your AI product is active and the token has correct permissions.
- Wrong models — Check the API version (V1 vs V2).
Ollama (Local Models)
Run open-source models locally. No API key or cloud account needed.
Set up Ollama
- Download from ollama.com/download (macOS, Linux, Windows).
- Pull a model: `ollama pull llama3.2`
- Verify it's running: `ollama list`
Add to Skilder
Enter the Ollama URL (defaults to http://localhost:11434) > Next: Select Model > pick a model > Validate & Save.
Popular models
| Model | Pull Command | Best For |
|---|---|---|
| Llama 3.2 | ollama pull llama3.2 | General-purpose, balanced quality and speed |
| Mistral | ollama pull mistral | Fast instruction-following |
| Phi-3 | ollama pull phi3 | Small and fast, constrained environments |
| Qwen 2.5 | ollama pull qwen2.5 | Multilingual and coding |
Browse more at ollama.com/library. After pulling a new model, click Configure on the Ollama card to refresh the list.
Troubleshooting
- "Connection refused" — Start Ollama with `ollama serve` or the desktop app.
- No models appear — Pull at least one model first.
- Custom URL not working — Check that `OLLAMA_HOST` allows external connections.
- Slow responses — Smaller models (Phi-3, Mistral) run faster without a GPU.
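The troubleshooting steps above can be scripted. A sketch that queries Ollama's local REST API (`/api/tags` lists pulled models) so you can confirm the server is reachable and has at least one model before configuring Skilder:

```python
import json
import urllib.request

def ollama_tags_url(base_url: str = "http://localhost:11434") -> str:
    # Ollama's REST API lists locally pulled models at /api/tags.
    return f"{base_url.rstrip('/')}/api/tags"

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    # A URLError ("connection refused") here means Ollama is not running;
    # an empty list means no models have been pulled yet.
    with urllib.request.urlopen(ollama_tags_url(base_url)) as resp:
        return [m["name"] for m in json.load(resp)["models"]]
```

Pass the same custom URL you entered in Skilder to verify it resolves from your machine.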
Updating or Removing a Provider
- Update: Click Configure on the provider card, enter new credentials, re-select a model, and save.
- Remove: Click Configure > Remove. Credentials are deleted immediately.
After removing a provider, the Studio and any skills using it stop working until you connect a replacement.
Verifying Your Connection
- Open the Studio and send a test message.
- Or use the built-in model tester on the AI Providers settings page.
Next Steps
- Studio — Test your connection by chatting with the configured model.
- Skills — Build capabilities that use your AI provider.
- Connect Your Agent — Link an external MCP client.