Models
Evroc Models
We've added support for 10 new models from Evroc: 8 chat models and 2 embedding models, all hosted in the EU. Pricing is listed below, and a short usage sketch follows the list.
- evroc/gpt-oss-120b: $0.22/M input, $0.86/M output
- evroc/phi-4-multimodal-instruct: $0.11/M input, $0.43/M output
- evroc/devstral-small-24b-instruct: $0.11/M input, $0.43/M output
- evroc/kimi-k2-thinking: $1.35/M input, $5.40/M output
- evroc/llama-3.3-70b-instruct: $1.08/M input, $1.08/M output
- evroc/qwen3-30b-instruct: $0.32/M input, $1.30/M output
- evroc/qwen3-vl-30b-instruct: $0.22/M input, $0.86/M output
- evroc/magistral-small: $0.54/M input, $2.16/M output
- evroc/qwen3-embedding-8b: $0.11/M tokens
- evroc/multilingual-e5-large-instruct: $0.11/M tokens
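Any of these can be selected by passing its model identifier when making a call through Opper. The snippet below is a minimal sketch only: the client constructor options, the call fields, and the returned message field are assumptions for illustration rather than the documented SDK surface, so check the API reference for exact signatures.
// Sketch only: constructor options, call fields, and the returned shape
// are illustrative assumptions; consult the SDK reference for specifics.
import Opper from "opperai";

const opper = new Opper({ httpBearer: process.env.OPPER_API_KEY });

const response = await opper.call({
  name: "eu_summarize",                     // hypothetical task name
  model: "evroc/llama-3.3-70b-instruct",    // any of the new Evroc chat models
  instructions: "Summarize the input in one sentence.",
  input: "Opper now offers EU-hosted Evroc models.",
});

console.log(response.message);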
gpt-5.2-codex
We've added support for OpenAI's gpt-5.2-codex model, optimized for coding tasks.
- Input: $0.75/M tokens
- Output: $4.00/M tokens
cerebras/glm-4.7
We've added support for the glm-4.7 model, available via cerebras/glm-4.7.
- Input: $2.25/M tokens
- Output: $2.75/M tokens
fireworks/minimax-m2.1
We've added support for the MiniMax M2.1 model, available via fireworks/minimax-m2.1.
- Input: $0.30/M tokens
- Output: $0.20/M tokens
Node Agents SDK v0.4.0
We've released version 0.4.0 of the Opper Node.js Agent SDK, with significant improvements to usage tracking, tool definitions, and event handling.
Usage Tracking with run()
The new run() method replaces process() as the primary way to interact with agents. It returns both the agent's result and detailed usage statistics, including token breakdowns for parent and nested agents or tools.
// run() returns the agent's result along with aggregated usage,
// including per-agent and per-tool token breakdowns.
const { result, usage } = await agent.run("Hello world");
console.log(`Used ${usage.total_tokens} tokens`);
Enhanced Tool Definitions
Tools now support outputSchema and examples, providing the LLM with better context for structured I/O and improving overall performance and reliability.
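As a rough illustration of the shape this enables, a tool can declare its structured output and a few sample calls alongside its handler. Apart from the outputSchema and examples fields named in this release, the field names and the lookupTemperature() helper below are assumptions for illustration, not the SDK's exact definition format.
// Sketch only: apart from outputSchema and examples, the field names and the
// hypothetical lookupTemperature() helper are illustrative assumptions.
const weatherTool = {
  name: "get_weather",
  description: "Return the current temperature for a city.",
  outputSchema: {                      // new: JSON Schema for the tool's result
    type: "object",
    properties: { celsius: { type: "number" } },
    required: ["celsius"],
  },
  examples: [                          // new: sample calls that ground the LLM
    { input: { city: "Stockholm" }, output: { celsius: -3 } },
  ],
  execute: async ({ city }) => ({ celsius: await lookupTemperature(city) }),
};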
Unified Event System
The event and hook system is now fully unified. You can monitor all 17 event types (lifecycle, tool, memory, streaming) using the standard agent.on() API.
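For example, listening for tool activity and streamed output might look like the sketch below. agent.on() is the API described above; the specific event names and payload fields are assumptions for illustration.
// Sketch only: agent.on() is the unified subscription API; the event names
// and payload fields used here are illustrative assumptions.
agent.on("tool:start", (event) => {
  console.log(`Calling tool: ${event.name}`);
});

agent.on("stream:delta", (event) => {
  process.stdout.write(event.text ?? "");
});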
https://github.com/opper-ai/opperai-agent-sdk-node/releases
Python Agent SDK v0.3.0
This release of the Opper Python Agent SDK introduces several improvements to developer experience and observability.
Configurable Tool Timeouts
You can now configure the maximum time an agent waits for a tool to respond using the agent_tool_timeout parameter. This defaults to 120 seconds and can be adjusted or disabled based on your needs.
Enhanced Trace Observability
Spans are now rendered with a visual hierarchy and emoji markers, making it easier to follow complex agent execution paths and identify different types of operations at a glance.
Improved Streaming
User messages and agent outputs are now displayed more clearly during streaming, providing better real-time feedback for long-running operations.
https://github.com/opper-ai/opperai-agent-sdk/releases/tag/v0.3.0

