Takeaways from AI Engineer World Fair, San Francisco 2024

By Göran Sandahl -

Three days at the AI Engineering World Fair in San Francisco, covering the what, how, and why of LLMs and how best to use them.

Using Examples and Few Shot Retrieval to Shape LLM Responses

By Göran Sandahl -

We build a pipeline to shape the output of LLM calls with synthetic examples and few-shot retrieval, and see how multiple non-frontier models perform.

Resilient Azure OpenAI using Azure API Management

By Johnny Chadda -

Exploring how to set up APIM with an OpenAI-compatible API and connect it to multiple OpenAI deployments. We will also cover how to set up failover and load balancing.

RAG metrics: answer correctness

By Mattias Lundell -

In this blog post we explore how to implement and use RAG metrics to evaluate the quality of model-generated answers, focusing on the answer correctness metric.

Introducing Delvin: State-of-the-art bug-fixing agent

By Alexandre Pesant -

Delvin is an agent that fixes issues from the SWE-Bench Lite dataset, achieving state-of-the-art accuracy (23%) with very simple code leveraging the Opper SDK.

Extracting recipes from images using gpt-4o in Opper

By Mattias Lundell -

In this blog post we explore how to use multimodal models in Opper, using the newly released gpt-4o model to generate structured data from images.

Simple RAG with citations

By Göran Sandahl -

In this blog post we show how to build a simple RAG feature with citations, using Opper with structured input/output and Mistral Large.

Examples are all you need: getting the most out of LLMs part 2

By Alexandre Pesant -

Getting GPT-3.5 Turbo to 80% accuracy on the GSM8K benchmark by leveraging the teacher-student pattern, synthetic examples, and few-shot retrieval.

Getting the best out of LLMs, part 1

By Alexandre Pesant -

Dramatically improving LLM accuracy with structured generation and chain-of-thought reasoning.

Introducing Opper

By Göran Sandahl -

At Opper, our mission is to accelerate adoption of Generative AI by making it simpler to build production grade reasoning applications, agents and features.