
Claude Cowork
How to run Claude Cowork with third-party inference providers
A walkthrough using Opper as the gateway: configure Cowork's Developer-mode dialog, add models across AWS Bedrock, GCP Vertex, and Azure, and switch providers mid-session.
Agent CLI
A CLI router for AI agents, including a hosted Claude Code router for any model. Switch models mid-task, fall back across providers, all on one OAuth login.
Why a gateway
Every model in the catalog is tagged with where it's deployed. Configure your agent to route only to EU-hosted models (Claude on GCP Vertex EU, Sonnet on AWS Bedrock EU, open-weight on Evroc) when inference has to stay in Europe.

Browse the model catalog →
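A minimal sketch of what that looks like from the CLI, assuming region pinning is done by picking an EU-hosted entry from the catalog and passing it to the --model flag shown in the FAQ below; the identifier here is a placeholder, not a real catalog name:

# placeholder id; substitute an EU-hosted entry from the catalog
$ opper launch claude --model bedrock-eu/claude-sonnet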
The same model (Claude Sonnet, say) is wired up across AWS, Azure, and GCP. If one rate-limits or goes down mid-session, the gateway routes to the next upstream without interrupting the agent.

How fallbacks work →
Not every prompt needs Opus. Run OpenCode or OpenClaw on a smaller or open-weight model for routine work (up to 98% cheaper for the same task) without leaving the CLI.

See the cost breakdown →
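A hedged example of that hand-off; the launch target name (opencode) and the model placeholder are assumptions for illustration, and claude with a non-Anthropic --model (as in the FAQ) follows the same pattern:

# target name and model id are illustrative; pick a real entry from the catalog
$ opper launch opencode --model <small-or-open-weight-model>

Relaunch with a larger model when the task calls for it; nothing else changes.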
Security
Opper never trains on customer data: code, emails, documents, or anything else your agent reads. Optional zero data retention drops prompts and completions at request time.
Log in once with OAuth. No OpenAI, Anthropic, or Google keys on your laptop. Every agent shares a single Opper-scoped key you can revoke from the dashboard at any time.
TLS in transit, AES-256 at rest, isolated data per organization. Platform deployed in AWS Stockholm with daily encrypted backups.
Install
One npm install, one OAuth login. Then launch Claude Code, Codex, OpenCode, OpenClaw, Hermes, or Pi.
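A minimal sketch of that flow. The npm package name below is a guess for illustration (check the install instructions for the real one); opper login and opper launch are taken from the FAQ:

# package name is illustrative
$ npm install -g opper
$ opper login
$ opper launch claude --model anthropic/claude-opus-4-7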
FAQ
Run opper login, then launch Claude Code with any model:

$ opper launch claude --model openai/gpt-5.5
$ opper launch claude --model anthropic/claude-opus-4-7

Switch models mid-session by relaunching with a different --model flag. No config changes needed.