Using o1-preview and o1-mini with RAG and structured output
By Göran Sandahl

In this blog post, we explore how OpenAI's reasoning models o1-mini and o1-preview perform in a RAG pipeline with structured output.
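As a rough illustration of the kind of pipeline discussed here, the sketch below wires a retrieval step into an o1-mini call and validates the reply against a Pydantic schema. It is a minimal sketch under several assumptions, not the post's actual implementation: it calls the OpenAI Python SDK directly rather than the Opper SDK, and the `retrieve` helper, the prompt wording, and the `Answer` schema are hypothetical stand-ins. Because o1-preview and o1-mini launched without support for system messages or `response_format`, the JSON contract is stated in the user prompt and enforced client-side.

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()


class Answer(BaseModel):
    """Target schema for the structured output."""
    answer: str
    sources: list[str]


def retrieve(question: str) -> list[str]:
    # Hypothetical retrieval step: stands in for whatever vector index
    # the pipeline actually queries.
    return ["Opper provides structured input/output for LLM calls."]


def ask(question: str) -> Answer:
    context = "\n".join(retrieve(question))
    # o1-preview and o1-mini launched without system-message or
    # response_format support, so the JSON contract goes in the user
    # prompt and the reply is validated client-side.
    completion = client.chat.completions.create(
        model="o1-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Context:\n{context}\n\n"
                f"Question: {question}\n\n"
                'Reply with JSON only, shaped as '
                '{"answer": "...", "sources": ["..."]}.'
            ),
        }],
    )
    raw = (completion.choices[0].message.content or "").strip()
    if raw.startswith("```"):
        # Models sometimes wrap JSON in a markdown fence; unwrap it.
        raw = raw.strip("`").removeprefix("json").strip()
    return Answer.model_validate_json(raw)


print(ask("What does Opper provide?"))
```

Validating with Pydantic rather than trusting the raw completion makes schema violations fail loudly, which is the practical point of structured output in a RAG pipeline.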