Models

Clawpy supports all major LLM providers through a unified routing layer. You bring your own API keys and choose which models power each agent in your swarm.

Supported Providers

Provider    Frontier Models                       Previous Generations
OpenAI      GPT-5.4, GPT-5.4-mini, GPT-5.4-nano   o3-pro, o3-mini, GPT-4.5-preview
Anthropic   Claude Opus 4.6, Claude Sonnet 4.6    Claude Haiku 4.5, Claude 3.7 Sonnet
Google      Gemini 3.1 Pro                        Gemini 3 Flash
DeepSeek    DeepSeek V3.2                         DeepSeek V3.1, V3
MiniMax     M2.7, M2.7-highspeed                  M2.5 series
Moonshot    Kimi K2.5, K2-thinking-turbo          K2-turbo-preview

How Model Selection Works

Each agent in your swarm can be assigned a different model. This lets you optimise for cost and capability:

  • Fast triage models (GPT-5.4-nano, Gemini Flash) for the Guardian scanner that decides whether to wake expensive agents
  • Reasoning models (Claude Opus, o3-pro) for the Architect and Auditor roles that need deep analysis
  • Balanced models (Claude Sonnet, GPT-5.4) for everyday coding and research tasks
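The role-to-model assignment described above can be sketched as a simple lookup. Note that the role names, model identifiers, and the `select_model` helper below are illustrative assumptions, not part of Clawpy's actual API; in practice this mapping lives in the Swarm Configurator.

```python
# Hypothetical sketch of per-agent model assignment.
# Role names and model ID strings are assumptions for illustration.

AGENT_MODELS = {
    "guardian": "gpt-5.4-nano",      # fast triage: cheap, low latency
    "architect": "claude-opus-4.6",  # deep analysis
    "auditor": "o3-pro",             # deep analysis
    "coder": "claude-sonnet-4.6",    # balanced everyday coding
    "researcher": "gpt-5.4",         # balanced everyday research
}

def select_model(role: str) -> str:
    """Return the model assigned to a role, defaulting to a balanced model."""
    return AGENT_MODELS.get(role, "claude-sonnet-4.6")
```

The point of the tiering is that the cheap Guardian model runs constantly, while the expensive reasoning models are only invoked when it wakes them.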

Select models from the Swarm Configurator dropdown in the dashboard. The backend automatically handles provider routing and API key management.

Bring Your Own Key

Clawpy uses a BYOK (Bring Your Own Key) model. You add your API keys to the encrypted Vault, and all costs are billed directly by your provider. Clawpy never proxies or marks up API calls.

To add a key, navigate to Settings → Vault in the dashboard and enter your provider credentials.

Unified Routing

Under the hood, Clawpy uses LiteLLM to normalise API calls across all providers. This means:

  • Consistent message format regardless of provider
  • Automatic retry and fallback logic
  • Token counting and budget tracking per agent and per division
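The retry-and-fallback behaviour can be sketched roughly as follows. This is a simplified stand-in, not Clawpy's or LiteLLM's actual implementation: the `send` callable stands in for the provider call (e.g. `litellm.completion`), and the retry policy shown is an assumption for illustration.

```python
from typing import Callable

def call_with_fallback(models: list, messages: list, send: Callable, retries: int = 2):
    """Try each model in order, retrying transient failures before falling back.

    `send` stands in for the underlying provider call; real routing code
    would catch provider-specific errors rather than bare Exception.
    """
    last_error = None
    for model in models:
        for _attempt in range(retries + 1):
            try:
                return send(model=model, messages=messages)
            except Exception as exc:  # simplified: catch-all for the sketch
                last_error = exc
    raise RuntimeError(f"all models failed: {last_error}")
```

Because every provider is called through the same normalised interface, the fallback chain can freely mix models from different vendors.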