
Claude Cowork
How to run Claude Cowork with third-party inference providers
A walkthrough using Opper as the gateway: configure Cowork's Developer-mode dialog, add models across AWS Bedrock, GCP Vertex, and Azure, and switch providers mid-session.
Agent CLI
A CLI router for AI agents — EU-hosted models, automatic fallbacks, and pay-as-you-go inference.
Claude Code: Anthropic's coding agent
Codex: OpenAI's coding agent
OpenCode: SST's open-source coding agent
OpenClaw: open-source personal AI agent
Hermes: Nous's open-source agent
Pi: personal AI from Inflection
Why a gateway
Every model in the catalog is tagged with where it's deployed. Configure your agent to route only to EU-hosted models — Claude on GCP Vertex EU, Sonnet on AWS Bedrock EU, open-weight on Evroc — when inference has to stay in Europe.
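As a rough sketch of what pinning a session to EU-hosted deployments could look like; the --region flag and the model slug below are assumptions for illustration, not confirmed Opper CLI syntax:

```shell
# Hypothetical: restrict routing to EU-hosted deployments only.
# --region and the model slug are assumed flags, not documented ones;
# check the model catalog for the actual EU deployment identifiers.
opper launch claude --model anthropic/claude-sonnet --region eu
```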
Browse the model catalog →
The same model (Claude Sonnet, say) is wired up across AWS, Azure, and GCP. If one rate-limits or goes down mid-session, the gateway routes to the next upstream without interrupting the agent.
How fallbacks work →
Not every prompt needs Opus. Run OpenCode or OpenClaw on a smaller or open-weight model for routine work, up to 98% cheaper for the same task, without leaving the CLI.
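Routing routine work to a cheaper model reuses the same launch syntax shown in the FAQ; the opencode target and the open-weight model slug here are illustrative assumptions:

```shell
# Sketch: point an agent at an assumed open-weight model slug
# for routine tasks instead of a frontier model.
opper launch opencode --model qwen/qwen3-coder
```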
See the cost breakdown →
Install
One npm install, one OAuth login — then launch Claude Code, Codex, OpenCode, OpenClaw, Hermes, or Pi.
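The two-step install above might look like this; the npm package name is an assumption for illustration, while opper login matches the FAQ command:

```shell
# Assumed package name; confirm the actual one in Opper's install
# docs before running.
npm install -g opper
opper login   # opens the OAuth login flow
```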
FAQ
opper login, then launch Claude Code with any model:

$ opper launch claude --model openai/gpt-5.5
$ opper launch claude --model anthropic/claude-opus-4-7

Switch models mid-session by relaunching with a different --model flag; no config changes needed.