Context Reasoning

Context understanding and reasoning tasks test whether a model can give accurate answers grounded in the provided context. This capability is essential for knowledge-base support bots, policy lookup systems, and internal knowledge Q&A applications. Models are evaluated on how well their answers stay grounded in the given context rather than hallucinating information.
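To make the setup concrete, the sketch below shows how a context-grounded question of this kind could be sent through an OpenAI-compatible chat completions API. This is a minimal illustration only: the base URL, API key, and model name are placeholders, not the actual configuration behind this evaluation.

```python
# Minimal sketch of a context-grounded Q&A request against an
# OpenAI-compatible endpoint. base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # placeholder OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",             # placeholder credential
)

context_document = "...full blog post text supplied as grounding context..."
question = (
    "In the blogpost about the New OpenAI compatible endpoint with Opper, "
    "there is an example where we use deepseek-r1-distill-llama-70b. "
    "What is the content (question) passed to the model?"
)

response = client.chat.completions.create(
    model="placeholder-model",  # the model under test is not named in this record
    messages=[
        {
            "role": "system",
            "content": "Answer only from the provided context. "
                       "If the answer is not in the context, say so.",
        },
        {
            "role": "user",
            "content": f"Context:\n{context_document}\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```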

Score: 100
Duration: 26s
Input Tokens: 89722
Output Tokens: 250
Cost: $0.00
Context
Input
In the blogpost about the New OpenAI compatible endpoint with Opper. There is an example where we use deepseek-r1-distill-llama-70b. What is the content (question) passed to the model?
Expected output
What is the capital of France? Please reverse the name before answering.
Model output
In the blogpost about the New OpenAI compatible endpoint with Opper, the example using deepseek-r1-distill-llama-70b passes the following question to the model:

"What is the capital of France? Please reverse the name before answering."