# Models
Clawpy supports all major LLM providers through a unified routing layer. You bring your own API keys and choose which models power each agent in your swarm.
## Supported Providers
| Provider | Frontier Models | Previous Generations |
|---|---|---|
| OpenAI | GPT-5.4, GPT-5.4-mini, GPT-5.4-nano | o3-pro, o3-mini, GPT-4.5-preview |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6 | Claude Haiku 4.5, Claude 3.7 Sonnet |
| Google | Gemini 3.1 Pro | Gemini 3 Flash |
| DeepSeek | DeepSeek V3.2 | DeepSeek V3.1, V3 |
| MiniMax | M2.7, M2.7-highspeed | M2.5 series |
| Moonshot | Kimi K2.5, K2-thinking-turbo | K2-turbo-preview |
## How Model Selection Works
Each agent in your swarm can be assigned a different model. This lets you optimise for cost and capability:
- Fast triage models (GPT-5.4-nano, Gemini Flash) for the Guardian scanner that decides whether to wake expensive agents
- Reasoning models (Claude Opus, o3-pro) for the Architect and Auditor roles that need deep analysis
- Balanced models (Claude Sonnet, GPT-5.4) for everyday coding and research tasks
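The tiering above amounts to a role-to-model map. As a minimal sketch (the role names, model identifiers, and dictionary shape here are illustrative assumptions, not Clawpy's actual configuration schema):

```python
# Hypothetical per-agent model assignment. Role names and model IDs are
# illustrative only -- not Clawpy's real configuration format.
AGENT_MODELS = {
    "guardian": "gpt-5.4-nano",      # cheap triage: decides whether to wake expensive agents
    "architect": "claude-opus-4-6",  # deep analysis
    "auditor": "o3-pro",             # deep analysis
    "coder": "claude-sonnet-4-6",    # balanced everyday work
}

def model_for(role: str) -> str:
    """Return the model assigned to a role, defaulting to the balanced coder model."""
    return AGENT_MODELS.get(role, AGENT_MODELS["coder"])
```

The point of the indirection is that swapping the Guardian from one triage model to another is a one-line change that never touches agent logic.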
Select models from the Swarm Configurator dropdown in the dashboard. The backend automatically handles provider routing and API key management.
## Bring Your Own Key
Clawpy uses a BYOK (Bring Your Own Key) model. You add your API keys to the encrypted Vault, and all costs are billed directly by your provider. Clawpy never proxies or marks up API calls.
To add a key, navigate to Settings → Vault in the dashboard and enter your provider credentials.
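In practice, BYOK means each provider call is authenticated with your own credential. A rough sketch of the lookup side, assuming the conventional environment-variable names that the providers' own SDKs use (the Vault itself is Clawpy-internal, so this only illustrates the idea):

```python
import os

# Conventional env var names used by each provider's SDK; the mapping is
# an assumption for illustration, not Clawpy's Vault implementation.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}

def get_key(provider: str) -> str:
    """Fetch the user's own API key for a provider, failing loudly if unset."""
    var = PROVIDER_ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"No credential for {provider}: set {var}")
    return key
```

Because the key is passed straight through to the provider, billing happens on your provider account with no intermediary.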
## Unified Routing
Under the hood, Clawpy uses LiteLLM to normalise API calls across all providers. This means:
- Consistent message format regardless of provider
- Automatic retry and fallback logic
- Token counting and budget tracking per agent and per division
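The retry-and-fallback behaviour can be sketched in plain Python, without the real LiteLLM client. This is a simplified stand-in for what a routing layer does (LiteLLM's actual logic is more sophisticated, with per-error-class handling and backoff):

```python
def complete_with_fallback(call, models, max_retries=2):
    """Try each model in order, retrying transient failures, until one succeeds.

    `call(model)` is a stand-in for the normalised provider API call.
    Returns (model_that_succeeded, response).
    """
    last_error = None
    for model in models:
        for _ in range(max_retries + 1):
            try:
                return model, call(model)
            except Exception as exc:  # a real router catches specific error classes
                last_error = exc
    raise RuntimeError(f"All models failed: {last_error}")

# Usage: if the first model's provider is down, the call silently
# falls through to the next model in the list.
def flaky(model):
    if model == "gpt-5.4":
        raise TimeoutError("provider unavailable")
    return "ok"
```

Because every provider is behind the same message format, the fallback list can freely mix providers, which is what makes per-agent budget tracking and cross-provider failover possible in one place.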