Context Engineering

Guide AI Models with Examples, Not Complex Prompts

Show, don't tell. Guide AI with curated examples to improve quality and consistency.

context_learning.py
from opper import Opper
from pydantic import BaseModel

opper = Opper()  # create the client

class EmailInput(BaseModel):
    customer_message: str
    tone: str

class EmailResponse(BaseModel):
    response: str

# Create a function with automatic example retrieval
function = opper.functions.create(
    name="customer_support",
    instructions="Generate helpful support responses",
    input_schema=EmailInput.model_json_schema(),
    output_schema=EmailResponse.model_json_schema(),
    configuration={
        "invocation.few_shot.count": 5  # pull the 5 most relevant examples per call
    },
)

# Add curated examples to guide the model
function.dataset.add_entry({
    "input": {"customer_message": "I can't log in", "tone": "friendly"},
    "output": {"response": "I'd be happy to help you with login issues..."},
})

# The model automatically uses relevant examples from the dataset
result = function.call(input={"customer_message": "Payment failed", "tone": "professional"})

Trusted by leading companies

Alska
Beatly
Caterbee
GetTested
Glimja
ISEC
Ping Payments
Psyscale
Steep
Sundstark
Textfinity

Challenge

Traditional Prompting Leaves Quality to Chance

You can't describe every nuance in a prompt. Models need examples to understand what "good" actually looks like.

Inconsistent Outputs

Generic prompts produce wildly different results each time. Your AI behaves unpredictably, requiring constant manual review and adjustment.

Complex Prompt Engineering

Writing perfect prompts is an art form. Every edge case requires more instructions, making prompts unmanageable and brittle.

Slow Iteration Cycles

Every quality improvement requires updating prompts, testing edge cases, and hoping the changes don't break existing functionality. Updates are risky and time-consuming.

Can't Capture Style

You know what good output looks like, but can't teach the model to match your brand voice, format, or quality standards consistently.

The Opper Way

Show Models What You Want, Not How to Do It

In-context learning with dynamic example selection

Few-Shot Learning

Show the model 3-10 examples of perfect outputs. It learns your exact style, format, and quality standards directly from them, no extra prompt engineering required.
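
A sketch of seeding a dataset with a handful of curated examples, reusing the dataset.add_entry call from the snippet at the top of this page. The messages and responses themselves are illustrative.

# Hand-picked (input, output) pairs that demonstrate the target style.
# Example content is illustrative; add_entry follows the snippet above.
curated_examples = [
    {
        "input": {"customer_message": "Where is my order?", "tone": "friendly"},
        "output": {"response": "Thanks for reaching out! Let me check on that order for you..."},
    },
    {
        "input": {"customer_message": "I want a refund", "tone": "professional"},
        "output": {"response": "I understand. I've initiated the refund and will confirm once it clears..."},
    },
    {
        "input": {"customer_message": "The app keeps crashing", "tone": "friendly"},
        "output": {"response": "Sorry about that! Which device are you on? I'll walk you through a fix..."},
    },
]

for example in curated_examples:
    function.dataset.add_entry(example)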

Semantic Example Selection

Opper automatically finds the most relevant examples from your dataset for each new task. The right context, every time, without manual selection.
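
A minimal sketch of what this looks like at call time, using the function defined in the snippet above: the retrieval count is set once in the configuration, and each call pulls the examples most similar to its own input.

# Each call retrieves its own nearest examples from the dataset,
# up to the count set via "invocation.few_shot.count" above.
billing_reply = function.call(
    input={"customer_message": "I was charged twice this month", "tone": "professional"}
)
login_reply = function.call(
    input={"customer_message": "The password reset email never arrived", "tone": "friendly"}
)
# billing_reply is guided by billing-related examples, login_reply by
# login-related ones, all drawn from the same dataset with no manual selection.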

Continuously Improving Datasets

Add new examples over time to handle edge cases and improve quality. Your AI gets smarter with every curated example you provide.
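
A sketch of that iteration loop, assuming the same dataset API as in the snippet above: when a call comes back off target, a reviewer writes the response you wanted and stores it as a new example, so future calls on similar inputs land closer.

# A production input that produced an off-brand response.
tricky_input = {"customer_message": "You charged me in the wrong currency", "tone": "professional"}
result = function.call(input=tricky_input)

# After review, store the corrected response as a new curated example.
function.dataset.add_entry({
    "input": tricky_input,
    "output": {"response": "I'm sorry about the currency mix-up. I've flagged the charge for correction and..."},
})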

Schema-Based Prompting

Define input and output schemas instead of writing complex prompts. The model infers the task from structured data, giving you predictable outputs and automatic validation across any model.
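
A sketch of schema-based prompting with Pydantic, following the functions.create call from the snippet above: field names, types, and descriptions carry most of the task definition, so the instructions string stays short. The ticket schema here is illustrative.

from pydantic import BaseModel, Field

class TicketInput(BaseModel):
    customer_message: str = Field(description="Verbatim message from the customer")
    tone: str = Field(description="Desired tone, e.g. 'friendly' or 'professional'")

class TicketOutput(BaseModel):
    response: str = Field(description="Reply written in the requested tone")
    needs_escalation: bool = Field(description="True if a human agent should take over")

triage = opper.functions.create(
    name="ticket_triage",
    instructions="Draft a reply and decide whether to escalate.",  # deliberately short
    input_schema=TicketInput.model_json_schema(),
    output_schema=TicketOutput.model_json_schema(),
)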

In-Context Learning Example

Playing Tic-Tac-Toe with the Right Context

We ran a tournament where GPT-4.1-mini played against GPT-4.1 in Tic-Tac-Toe. With zero examples, the smaller model won 28% of games. After providing 10 curated examples of good gameplay, the win rate increased to 94%.

View Full Experiment

Case Study

How Beatly Automates Influencer Campaigns

Beatly uses Opper's context engineering to match brands with creators and automate campaign management, transforming from an agency into a subscription platform with AI-powered brief generation and asset tagging.

Ready to Improve AI Quality with Examples?

Start using in-context learning to guide your models today

Get started free View Documentation