Task Completion API

Structured AI outputs with schema-first task completion

Define tasks with schemas and get structured, validated AI outputs. No prompt engineering needed.

task_completion.py
from opperai import Opper
from pydantic import BaseModel, Field

opper = Opper()

class QueryInput(BaseModel):
    facts: list[str] = Field(description="Facts to query")
    question: str

class QueryOutput(BaseModel):
    answer: str
    reasoning: str

response = opper.call(
    name="fact_query",
    instructions="Answer using the provided facts",
    input_schema=QueryInput,
    output_schema=QueryOutput,
    input={
        "facts": ["Jupiter is the largest planet"],
        "question": "What is the largest planet?"
    }
)

print(response.json_payload)
# {'answer': 'Jupiter', 'reasoning': '...'}

Trusted by thousands of developers and leading companies

Alska
Beatly
Caterbee
GetTested
Glimja
ISEC
Ping Payments
Psyscale
Steep
Sundstark
Textfinity

Challenge

Why do unstructured prompts fail in production?

Free-form prompts are fine for demos. But in production, you need structure, validation, and reliability.

Unpredictable Outputs

Free-form prompts produce inconsistent results that require constant manual validation. One run works perfectly; the next fails completely, with the same input.

Validation Nightmare

No guarantee of output structure. Parsing errors, missing fields, wrong types. You spend more time handling edge cases than building features.

Prompt Brittleness

Complex prompts break when requirements change. Hard to maintain, impossible to version, and debugging feels like guesswork.

No Observability

Black box completions with no insight into what went wrong or how to improve. You're flying blind in production.

The Opper Way

Schema-first task completion

Declarative, validated, and production-ready out of the box

Schema-Based Prompting

Type safety for AI outputs

Define tasks with input/output schemas instead of complex prompts. Field descriptions guide the model, automatic validation ensures structure, predictable outputs.

  • Pydantic and Zod schema support
  • Automatic validation with regex, enums, and literals
  • Field-level descriptions replace prompt engineering
Read: Introduction to schema-based prompting
Schema Definition
class TaskOutput(BaseModel):
    classification: Literal["easy", "medium", "hard"]
    answer: str = Field(
        description="Answer starting with 'The answer is'",
        pattern=r"^The answer is [A-Za-z0-9\s]+$"
    )

Schemas provide structure and validation—no parsing errors, no unexpected formats.
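The schema contract can be checked in isolation. Here is a minimal standalone sketch using plain Pydantic v2 (no Opper call involved; note the `\s` escape in the character class):

```python
from typing import Literal

from pydantic import BaseModel, Field, ValidationError

class TaskOutput(BaseModel):
    classification: Literal["easy", "medium", "hard"]
    answer: str = Field(
        description="Answer starting with 'The answer is'",
        pattern=r"^The answer is [A-Za-z0-9\s]+$",
    )

# A conforming payload validates cleanly
ok = TaskOutput(classification="easy", answer="The answer is Jupiter")

# A malformed payload is rejected before it reaches application code:
# "trivial" is outside the Literal set and "Jupiter" misses the pattern
try:
    TaskOutput(classification="trivial", answer="Jupiter")
except ValidationError as exc:
    print(f"rejected with {len(exc.errors())} validation error(s)")
```

This is the same contract applied to model outputs: anything that fails the schema never reaches your code as a "successful" result.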

Multi-Model Intelligence

Automatic fallback & retries

Specify multiple models with automatic fallback: start with fast, cheap models and fall back to more powerful ones when needed. Retries and optimization are handled automatically.

  • Model cascade with automatic fallback
  • Retry logic handles transient failures
  • Model-specific configuration per task
Route across 200+ models with our gateway
Model Fallback Configuration
model = [
    {
        "name": "openai/gpt-4.1-nano",
        "options": {"temperature": 0.1}
    },
    "openai/gpt-4o-mini",  # fallback
    "openai/gpt-4.1"       # final fallback
]

# Platform handles retries automatically
response = opper.call(
    name="task",
    model=model,
    input_schema=Input,
    output_schema=Output
)

Define model cascades with automatic fallback—cost-efficient with built-in reliability.
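Conceptually, the cascade behaves like the loop below. This is an illustrative client-side sketch, not Opper's internal implementation; `task` stands in for any callable that invokes one model and raises on transient failure.

```python
import time

def call_with_fallback(models, task, max_retries=2):
    """Try each model in order, retrying transient failures with backoff.

    Illustrative only; the platform does this server-side. Entries may
    be plain model names or dicts with per-model options.
    """
    last_error = None
    for entry in models:
        name = entry["name"] if isinstance(entry, dict) else entry
        for attempt in range(max_retries + 1):
            try:
                return task(name)
            except Exception as err:
                last_error = err
                time.sleep(0.1 * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all models in the cascade failed") from last_error

# Demo: the first model always times out, the second answers
models = [
    {"name": "openai/gpt-4.1-nano", "options": {"temperature": 0.1}},
    "openai/gpt-4o-mini",
]

def flaky(model_name):
    if model_name == "openai/gpt-4.1-nano":
        raise TimeoutError("transient failure")
    return f"answer from {model_name}"

print(call_with_fallback(models, flaky))  # answer from openai/gpt-4o-mini
```

The cheap model absorbs the bulk of traffic; the cascade only pays for larger models when the earlier ones fail.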

Example-Guided Completions

Show edge cases, get consistency

Provide 3-10 examples to guide model behavior on edge cases. The platform automatically selects the most relevant examples from your dataset for each task.

  • Few-shot learning with 3-10 examples
  • Automatic semantic example selection
  • Handle edge cases with curated examples
Few-Shot Example
examples = [{
    "input": {
        "facts": [...],
        "question": "How many planets?"
    },
    "output": {
        "thoughts": "No relevant facts provided...",
        "answer": "The answer is unknown"
    }
}]

# Platform uses examples to handle edge cases
response = opper.call(
    name="task",
    examples=examples,
    input={...}
)

Examples teach the model how to handle edge cases—like missing data or ambiguous inputs.
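Automatic example selection can be pictured as a relevance ranking. The sketch below scores examples against the incoming input with toy bag-of-words cosine similarity; this is purely illustrative, as the platform's actual selection is embedding-based and happens server-side.

```python
import math
from collections import Counter

def select_examples(query, examples, k=3):
    """Return the k examples whose inputs look most similar to the query.

    Toy bag-of-words cosine similarity, standing in for real embeddings.
    """
    def vectorize(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(count * b[token] for token, count in a.items())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    query_vec = vectorize(query)
    ranked = sorted(
        examples,
        key=lambda ex: cosine(query_vec, vectorize(str(ex["input"]))),
        reverse=True,
    )
    return ranked[:k]

examples = [
    {"input": {"question": "Will it rain tomorrow?"},
     "output": {"answer": "The answer is unknown"}},
    {"input": {"question": "What is the largest planet?"},
     "output": {"answer": "The answer is Jupiter"}},
]

best = select_examples("what is the largest planet", examples, k=1)
print(best[0]["output"]["answer"])  # The answer is Jupiter
```

Because selection happens per task, the examples the model sees track the input it is actually handling rather than a fixed few-shot prefix.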

Server-Side Function Management

Server-side configuration

Centralize task configuration server-side: version prompts, manage datasets, run A/B tests, and ship changes without redeploying code.

  • Manage tasks and examples outside code
  • Version control for all configurations
  • A/B test models and prompts in production
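One way to think about server-side management: each task is a versioned record bundling its model, instructions, and examples, so publishing a new version changes behavior without touching application code. The dataclass below is a hypothetical illustration; the field names are assumptions, not Opper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FunctionConfig:
    """Hypothetical versioned record for one task (illustrative only)."""
    name: str
    version: str
    model: str
    instructions: str
    examples: list = field(default_factory=list)

# "Deploying" a prompt change is publishing a new version of the record;
# callers keep referencing the task by name only.
v23 = FunctionConfig(
    name="kb_query", version="2.3", model="gpt-5-mini",
    instructions="Answer using the provided facts",
)
v24 = FunctionConfig(
    name="kb_query", version="2.4", model="gpt-5-mini",
    instructions="Answer using the provided facts; cite the fact used",
)
```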
Function Management Dashboard (illustrative): kb_query, Version 2.3 • Production (Active). Model: gpt-5-mini • Examples: 8 curated • Dataset: 432 entries • Quality: 94.2%

Case Study

How AI-BOB automates construction compliance

AI-BOB uses Opper's Task Completion API to transform plain-language building requirements into reliable, auditable compliance checks — with schema-enforced outputs, built-in evaluators, and full observability embedded directly in architects' workflows.

Ready to get structured, reliable AI outputs?

Structured, validated outputs with automatic retries and observability. Production-ready in hours.

Get started View Documentation