AI Agent SDK

Build production AI agents with tracing, tools, and any model

Tool calling, multi-step reasoning, and observability built in. One SDK, any model, full tracing.

agent.py
from opper_agents import Agent, tool

# Define a tool for your agent
@tool
def search_web(query: str) -> str:
    """Search the web and return results"""
    return search_api(query)  # call your search backend here

# Create and run an autonomous agent
agent = Agent(
    name="research-agent",
    tools=[search_web],
    instructions="Research topics and provide summaries",
)

# Agent runs autonomously with full tracing
result = await agent.process("Find the latest AI research papers")

Trusted by thousands of developers and leading companies

Alska
Beatly
Caterbee
GetTested
Glimja
ISEC
Ping Payments
Psyscale
Steep
Sundstark
Textfinity

Challenge

What makes building production AI agents so hard?

Most teams waste months building fragile agent systems from scratch, dealing with unreliable tool calling, vendor lock-in, and zero observability. There's a better way.

Unreliable Tool Calling

Models hallucinate functions or fail to execute tools correctly, requiring manual error handling

Vendor Lock-in

Switching between providers means rewriting your entire integration

Complex Orchestration

Chaining steps, handling retries, and managing state requires significant engineering effort

Zero Visibility

Agent failures provide no insight into what went wrong or how to debug

The Opper Way

Everything you need to build reliable AI agents in production

Built-in tool calling, automatic tracing, vector search, and multi-model support

MCP integration & custom tools

Connect to external tools and services using Model Context Protocol (MCP). Give your agents direct access to GitHub, Slack, Notion, databases, and more without custom integrations.

  • GitHub, Slack, Notion support
  • Database connectors
  • No custom integration code
mcp_integration.py
from opper_agents import Agent, mcp, MCPServerConfig

# Configure MCP servers
github_mcp = MCPServerConfig(
  name="github",
  transport="streamable-http",
  url="your-mcp-url"
)

# Create agent with MCP tools
agent = Agent(
  name="DevOpsBot",
  tools=[mcp(github_mcp)]
)

await agent.process("Create PR and notify team")

Span tracing & metrics

Every agent action is automatically traced with detailed spans. Attach custom metrics, track performance, and debug production issues with full visibility.

  • Automatic span generation
  • Custom metrics: span_metrics.create_metric()
  • Production debugging
See full observability and evaluations
Span Trace
12:34:01.234  agent.process() started
12:34:01.456  @tool get_weather('Paris')
12:34:01.678  span_metrics.create_metric()
12:34:01.890  Response generated
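Attaching a custom metric to a traced run might look like the sketch below. The `create_metric` call and its parameter names (`span_id`, `dimension`, `value`) follow the bullet above but are illustrative; check the SDK reference for the exact signature. The scoring helper is a toy stand-in for whatever quality signal you track.

```python
# Minimal sketch: compute a score for an agent response, then attach it
# to the run's span as a custom metric. The API call is hypothetical and
# shown commented out; score_response() is a self-contained toy example.

def score_response(text: str) -> float:
    """Toy quality score: fraction of sentences under 30 words."""
    sentences = [s for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    short = sum(1 for s in sentences if len(s.split()) < 30)
    return short / len(sentences)

# After a traced run (assumed attribute names, for illustration):
# result = await agent.process("Summarize our open support tickets")
# opper.span_metrics.create_metric(
#     span_id=result.span_id,          # span id from the traced result
#     dimension="quality_score",       # your metric name
#     value=score_response(result.output),
# )
```

Recording metrics against spans this way keeps evaluation data next to the trace that produced it, so regressions show up in the same view you use for debugging.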

Built-in vector search

Index your data and query with semantic search. Built-in RAG capabilities without managing separate vector databases. Perfect for support tickets, documentation, and knowledge bases.

  • Semantic search built-in
  • Metadata filtering
  • No separate vector DB needed
Learn about context engineering
vector_search.py
# Assumes an initialized client, e.g. opper = Opper(api_key=...)

# Index your documents
index = opper.indexes.create(
  name="support-tickets"
)

index.add(
  content="Login issue...",
  metadata={"status": "open"}
)

# Semantic search with filters
results = index.query(
  query="can't sign in",
  filters={"status": "open"}
)

Any model, one API

Run agents on 200+ AI models from OpenAI, Anthropic, Google, xAI and more. Switch models without changing code. Use cerebras/gpt-oss-120b for speed or gpt-5 for quality.

  • 200+ models supported
  • Switch with model parameter
  • Automatic fallbacks
Explore the LLM gateway
model_switching.py
# Use fast model for quick responses
agent = Agent(
  name="FastBot",
  model="cerebras/gpt-oss-120b"
)

# Switch to premium for complex tasks
agent.update(
  model="openai/gpt-5"
)

# Use model-specific features
await agent.process(
  prompt="Analyze this",
  model="anthropic/claude-sonnet"
)

Real-time Streaming

Stream agent responses token-by-token for instant feedback. See tool calls and reasoning in real-time. Perfect for chatbots and interactive applications.

  • Token-by-token streaming
  • Stream tool calls & events
  • Real-time reasoning visibility
streaming.py
from opper_agents import Agent, hook, HookEvents
from opper_agents.base.context import AgentContext

# Stream responses in real-time
@hook(HookEvents.STREAM_CHUNK)
async def on_chunk(
  context: AgentContext,
  chunk_data: dict,
  accumulated: str
) -> None:
  print(chunk_data.get("delta"), end="")

agent = Agent(name="StreamBot")
await agent.process("Explain quantum computing")

Case Study

How AI-BOB automates construction compliance

AI-BOB uses Opper's Task Completion API to transform plain-language building requirements into reliable, auditable compliance checks — with schema-enforced outputs, built-in evaluators, and full observability embedded directly in architects' workflows.

Ready to build reliable AI agents in production?

Join teams shipping production AI agents with Opper. Start building in minutes, not months.

Get started View Documentation