Introducing Opper Agent SDKs - Framework for reliable, headless agents
By Göran Sandahl
We're excited to announce the release of our new Opper Agent SDKs (Python and TypeScript), a framework designed specifically for building intelligent, headless agents that can reason, act, and collaborate seamlessly.
They are built on top of the Opper Task Completion API, which offers model interoperability and in-context learning, so you can steer your agents' behavior with feedback.
Whether you're building research assistants, automation tools, or complex multi-agent systems, our SDKs provide the foundation you need to create production-ready AI agents.
Getting Started: Your First Agent in 3 Steps
Building an agent with our SDK is remarkably simple. Here's everything you need to get started:
import asyncio

from opper_agents import Agent, tool

# 1. Define your tools
@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny"

# 2. Create the agent
agent = Agent(
    name="WeatherBot",
    description="Helps with weather queries",
    tools=[get_weather],
    model="cerebras/gpt-oss-120b"
)

# 3. Run it
result = asyncio.run(agent.process("What's the weather in Paris?"))
That's it! Just set your OPPER_API_KEY environment variable and you're ready to go.
Extend Agent with Tools
The SDK supports both custom Python tools and, as one of our key differentiators, seamless integration with the Model Context Protocol (MCP), allowing your agents to connect to external services and data sources with minimal effort.
Custom tools
You can create custom tools with the @tool decorator:
@tool
def analyze_data(data: str) -> dict:
    """Analyze data and return insights."""
    # Your analysis logic here
    return {"insights": ["trend_up", "anomaly_detected"]}

@tool
def send_alert(message: str, priority: str = "medium") -> str:
    """Send an alert notification."""
    # Your alerting logic here
    return f"Alert sent: {message}"
MCP tools
You can also connect to MCP servers to pull in tools:
import os

from opper_agents import Agent, mcp, MCPServerConfig

# Configure filesystem access
filesystem_server = MCPServerConfig(
    name="filesystem",
    transport="stdio",
    command="docker",
    args=["run", "-i", "--rm", "-v", f"{os.getcwd()}:/workspace",
          "node:20", "npx", "-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
)

# Configure database access
sqlite_server = MCPServerConfig(
    name="sqlite",
    transport="stdio",
    command="uvx",
    args=["mcp-server-sqlite", "--db-path", "./data.db"]
)

# Create agent with MCP tools
agent = Agent(
    name="DataAgent",
    description="Agent with filesystem and database capabilities",
    tools=[mcp(filesystem_server), mcp(sqlite_server)]
)
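Once the servers are wired in, the MCP tools are invoked like any other tool during a run. A small illustrative example follows; the prompt is ours, and as elsewhere the call is awaited inside an async function:

# Ask the agent to combine filesystem and database capabilities (illustrative prompt)
result = await agent.process(
    "List the files in /workspace and store their names in a table in data.db"
)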
Choosing the Right Model
One of the key advantages of the Opper Agent SDK is model interoperability - you can easily switch between different language models to optimize for your specific use case. The SDK supports all major model providers through Opper's unified API.
Model Selection Strategy
Different models excel at different types of tasks. You can test an agent with different models simply by changing the model parameter, and you can also compare models in the Opper platform UI.
# Note: the tool functions referenced below are placeholders for your own @tool functions

# For complex reasoning and planning tasks
reasoning_agent = Agent(
    name="ReasoningAgent",
    model="openai/gpt-5",  # Best for complex multi-step reasoning
    tools=[complex_analysis, strategic_planning]
)

# For fast, cost-effective tasks
quick_agent = Agent(
    name="QuickAgent",
    model="cerebras/gpt-oss-120b",  # Ultra-fast inference
    tools=[simple_classification, data_extraction]
)

# For specialized coding tasks
code_agent = Agent(
    name="CodeAgent",
    model="anthropic/claude-sonnet-4.5",  # Excellent for code generation
    tools=[code_analysis, refactoring]
)
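Because the model is just a parameter, you can sweep one agent definition across several models to compare answers, latency, and cost. The sketch below assumes this pattern; the prompt, agent name, and empty tool list are illustrative assumptions:

import asyncio

from opper_agents import Agent

async def compare_models(task: str) -> None:
    # Run the same task against each candidate model and print the answers
    for model in ["openai/gpt-5", "cerebras/gpt-oss-120b", "anthropic/claude-sonnet-4.5"]:
        agent = Agent(
            name="EvalAgent",
            description="Answers evaluation prompts",
            model=model,
            tools=[],  # no tools needed for this comparison (assumes an empty tool list is allowed)
        )
        print(model, await agent.process(task))

asyncio.run(compare_models("Explain compound interest in two sentences."))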
Extend Agent logic with Hooks
One of the most powerful features of our SDK is the comprehensive hook system that lets you intercept and process agent behavior at any point in the execution lifecycle.
Example: Print Agent reasoning
Here we hook into the agent after it has completed a round of thinking:
@hook("think_end")
async def on_think_end(
context: AgentContext,
agent: BaseAgent,
thought: any
) -> None:
"""Monitor the agent's reasoning process."""
print(f"Agent thoughts: {thought.reasoning[:100]}...")
# Create agent
agent = Agent(
name="MyAgent",
description="This is my Agent",
hooks=[on_think_end]
)
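With the hook registered, every round of reasoning triggers on_think_end and prints a preview of the agent's thoughts. For example (the prompt is illustrative, and the call runs inside an async function):

# The hook fires after each thinking step and logs the first 100 characters of reasoning
result = await agent.process("Outline a three-step plan for summarizing a long report")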
Example: Clean verbose Tool outputs
Here we intercept the agent after it has executed a tool. In this case we implement a simple operation that cleans tool results (for example, reducing a full web page down to just the facts we are looking for).
@hook("tool_result")
async def clean_tool_output(
context: AgentContext,
agent: Agent,
tool,
result: ToolResult,
) -> None:
# We use Opper AI to clean the tool output
from opperai import Opper
opper = Opper(http_bearer=os.getenv("OPPER_API_KEY"))
output = opper.call(
name="clean_tool_output",
instructions="Clean tool output from irrelevant data and keep only the information that is essential to the task",
input={
"goal": context.goal,
"tool_name": tool.name,
"tool_result": result.result,
},
model="gcp/gemini-flash-latest",
)
result.result = output.message
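As with the reasoning hook above, the cleaner is attached through the hooks parameter. A minimal sketch; the agent name, description, and the search_web placeholder tool are illustrative:

# Every tool result is condensed by clean_tool_output before the agent reasons over it
web_agent = Agent(
    name="WebResearcher",
    description="Looks up facts on the web",
    tools=[search_web],          # placeholder @tool that fetches pages
    hooks=[clean_tool_output],
)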
Sub-Agents: Agents as tools
We can also use agents as tools within other agents. This lets you create sophisticated multi-agent systems with clean delegation patterns.
Simple Agent Delegation
# Create specialized agents
math_agent = Agent(
    name="MathAgent",
    description="Performs mathematical calculations",
    instructions="Always show your work step by step.",
    tools=[calculate, solve_equation]
)

research_agent = Agent(
    name="ResearchAgent",
    description="Researches and explains concepts",
    instructions="Provide clear, detailed explanations.",
    tools=[search_web, summarize_content]
)

# Use them as tools in a coordinator
coordinator = Agent(
    name="Coordinator",
    description="Delegates tasks to specialized agents",
    tools=[
        math_agent.as_tool(),
        research_agent.as_tool()
    ]
)

# The coordinator can now delegate tasks
result = await coordinator.process(
    "Calculate the compound interest for $1000 at 5% for 10 years, "
    "then research the history of compound interest"
)
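Because an agent wrapped with as_tool() is used like any other tool, delegation can be nested further. A minimal sketch; the top-level agent name and description are illustrative:

# The coordinator itself can be exposed as a tool to a higher-level agent
portfolio_agent = Agent(
    name="PortfolioAgent",
    description="Handles broader finance questions by delegating to the coordinator",
    tools=[coordinator.as_tool()],
)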
Built-in Tracing and Observability
Every agent execution is automatically traced through Opper's observability system. You get:
- Complete execution traces showing the full reasoning process
- Tool execution metrics with timing and success rates
- Token usage tracking for cost monitoring
- Span hierarchy showing agent delegation patterns
- Real-time monitoring through the Opper Dashboard
# Automatic tracing - no code needed!
agent = Agent(name="MyAgent", tools=[...])
result = await agent.process("My task")
# View traces at https://platform.opper.ai
# See complete execution flow, tool calls, and performance metrics
Production-Ready Features
Our SDKs are built for production use with:
- Type Safety: Full Pydantic model validation throughout (see the sketch after this list)
- Error Handling: Robust error handling with graceful degradation
- Memory Management: Optional persistent memory for long-running agents
- Async-First: Built for high-performance async operations
- Extensible Architecture: Easy to build custom agent types
- Comprehensive Testing: 90%+ test coverage with extensive examples
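To illustrate the type-safety point, a tool can validate its payload with a Pydantic model before returning it. The sketch below is our own illustration; WeatherReport and its static data are assumptions, not SDK types:

from pydantic import BaseModel

from opper_agents import tool

class WeatherReport(BaseModel):
    city: str
    temperature_c: float
    conditions: str

@tool
def structured_weather(city: str) -> dict:
    """Return a structured weather report for a city."""
    # Validate the payload with Pydantic before handing it back to the agent
    report = WeatherReport(city=city, temperature_c=21.5, conditions="sunny")
    return report.model_dump()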
Getting Started Today
Ready to build your first headless agent? Here's how to get started:
- Install the SDK:
  pip install opper-agents
- Get your API key at platform.opper.ai
- Set your environment:
  export OPPER_API_KEY="your-api-key"
- Run the examples:
  git clone https://github.com/opper-ai/opperai-agent-sdk.git
  cd opperai-agent-sdk
  python examples/01_getting_started/01_first_agent.py