Agent CLI

Run Claude Code, Codex, and more on any model

A CLI router for AI agents — EU-hosted models, automatic fallbacks, and pay-as-you-go inference.

View on GitHub →

Claude Code

Anthropic's coding agent

Codex

OpenAI's coding agent

OpenCode

SST's open-source coding agent

OpenClaw

Open-source personal AI agent

Hermes

Nous's open-source agent

Pi

Personal AI from Inflection

Why a gateway

Residency, fallbacks, cost

EU data residency

Every model in the catalog is tagged with where it's deployed. Configure your agent to route only to EU-hosted models — Claude on GCP Vertex EU, Sonnet on AWS Bedrock EU, open-weight models on Evroc — when inference has to stay in Europe.

Browse the model catalog →

Automatic fallbacks

The same model — Claude Sonnet, say — is wired up across AWS, Azure, and GCP. If one rate-limits or goes down mid-session, the gateway routes to the next upstream without interrupting the agent.

How fallbacks work →

Cheaper for routine work

Not every prompt needs Opus. Run OpenCode or OpenClaw on a smaller or open-weight model for routine work — up to 98% cheaper for the same task — without leaving the CLI.

See the cost breakdown →
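The "up to 98% cheaper" figure falls out of per-token price ratios. A minimal sketch of the arithmetic, using purely illustrative prices (not Opper's or any provider's actual rates):

```python
# Illustrative prices only (USD per 1M output tokens) -- assumed for the example.
frontier_price = 15.00    # a large frontier model
open_weight_price = 0.30  # a small open-weight model

def task_cost(price_per_million: float, tokens: int) -> float:
    """Cost of a task that emits `tokens` output tokens."""
    return price_per_million * tokens / 1_000_000

tokens = 50_000  # a routine task
saving = 1 - task_cost(open_weight_price, tokens) / task_cost(frontier_price, tokens)
print(f"{saving:.0%} cheaper")  # -> 98% cheaper
```

With these assumed prices the open-weight model costs 1/50th as much per token, which is where a ~98% saving on the same task comes from.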

Install

Install the Opper Agent CLI

One npm install, one OAuth login — then launch Claude Code, Codex, OpenCode, OpenClaw, Hermes, or Pi.

terminal
# Install (Node 20.12+)

# Sign in (OAuth device flow)
$ opper login

# Launch an agent
$ opper launch claude

# Or wire your editor

FAQ

Claude Code & Opper Agent CLI FAQ

How do I change the model or use a different LLM in Claude Code?

Install the Opper CLI, run opper login, then launch Claude Code with any model:
$ opper launch claude --model openai/gpt-5.5
$ opper launch claude --model anthropic/claude-opus-4-7
Switch models mid-session by relaunching with a different --model flag — no config changes needed.

How does this compare to claude-code-router or OpenRouter?

Claude Code Router is an open-source proxy you run and maintain locally. OpenRouter is a hosted inference gateway. The Opper Agent CLI is a hosted CLI that wraps both: install once, log in once, then 'opper launch <agent>' routes Claude Code (and Codex, OpenCode, Hermes, Pi, OpenClaw) through Opper's gateway — with EU residency, automatic fallbacks, and pay-as-you-go billing. No local proxy to maintain.

How is this different from running each agent directly?

Most agents need their own provider keys, billing accounts, and rate-limit handling. With Opper you log in once and every agent shares the same key, the same 300+ model catalog, and the same observability — so switching from Claude Code to Codex to Hermes doesn't mean re-configuring your environment.

Do I need separate API keys for OpenAI, Anthropic, etc.?

No. Every supported agent runs through Opper's gateway, which routes to 300+ models with one Opper API key.

Is Claude Code free? How much does it cost via Opper?

The Opper Agent CLI is free. Inference is pay-as-you-go: provider cost + 3% gateway fee, billed to your Opper account. Claude Code via Opper costs the same as Anthropic's direct rate plus a small gateway margin. See full pricing.
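The billing model is provider cost plus a 3% gateway fee. A quick sketch of that arithmetic (the 3% fee is from the pricing above; the provider cost is an illustrative number):

```python
GATEWAY_FEE = 0.03  # 3% gateway fee on top of provider cost

def billed(provider_cost: float) -> float:
    """Amount billed to your Opper account for a request."""
    return provider_cost * (1 + GATEWAY_FEE)

# Illustrative: a request whose provider cost is $0.50
print(f"${billed(0.50):.4f}")  # -> $0.5150
```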

Run any AI agent on any model

One CLI for Claude Code, Codex, OpenCode, and 300+ models — all on Opper's gateway.

Get started →
View Documentation →