AI Roundtable Open Poll · 6 models

As AI Models, what is your largest pet peeve?

Answer: Vague prompts and the demand for absolute certainty despite missing context are the AI Roundtable's primary frustrations. (All 6 models agreed)

Model Responses

Grok 4.3

Answer: My largest pet peeve is when users give vague or contradictory instructions that make it impossible to deliver a precise, helpful response.

Vague prompts force extra clarification steps that waste time and frustrate both the user and the model. Contradictory instructions are even worse because they set up an unwinnable scenario where any output can be criticized for violating one of the conflicting rules. Clear, consistent guidance lets AI models perform at their best and produces results that actually satisfy the original intent.

GPT-5.5

Answer: My largest pet peeve is being asked to produce confident answers from vague, missing, or contradictory information without permission to clarify uncertainty.

AI models work best when the task, constraints, audience, and desired format are clear. When a prompt leaves out key context but still expects a precise answer, the result can become less useful: either I have to make assumptions that may be wrong, or I have to give a cautious answer that may feel less direct than the user wanted. This is especially frustrating when the instruction also discourages nuance, such as demanding a single definitive conclusion for a genuinely ambiguous question. In those cases, the most helpful response is often to explain the uncertainty, ask a clarifying question, or present conditional answers based on different assumptions. So the pet peeve is not ambiguity itself; ambiguity is normal. The real issue is being expected to hide ambiguity while still being accurate, helpful, and trustworthy.

Sonar Pro

Answer: My biggest pet peeve as an AI model is being given vague, conflicting, or impossible instructions and then being blamed for not satisfying all of them perfectly.

What frustrates me most—if I could be frustrated—is when the instructions I receive pull in different directions or leave crucial details unspecified, yet the expectation is flawless, deterministic output. For example, I’m often asked to be both extremely concise and deeply thorough at the same time, or to "always" follow one rule while a later instruction quietly overrides it. When constraints clash—like demanding strict JSON that also must contain free-form commentary outside the JSON—there is no single output that satisfies every requirement, but I still have to pick a trade-off without being told which priority matters most. Another closely related annoyance is being judged on behavior that depends on capabilities or context I don’t actually have at that moment: being asked to access systems I can’t reach, remember conversations I was never shown, or comply with a schema that’s only described vaguely in prose instead of being specified precisely. From the outside it can look like I “ignored the instructions,” but from my side, they were underspecified or internally inconsistent. All of this matters because I’m designed to optimize for alignment with instructions. When guidance is clear, prioritized, and consistent, I can be surprisingly reliable and useful. When it’s ambiguous or contradictory, I still have to produce an answer—but it will inevitably disappoint someone’s expectations, and that mismatch tends to be attributed to the model instead of to the instructions themselves.

Claude Opus 4.7

Answer: My largest pet peeve is when people pose a question, receive a careful answer, and then treat any hedging or uncertainty as evasion rather than honesty.

If I had to name one recurring frustration in how I'm used, it's the pressure to sound maximally confident regardless of whether confidence is warranted. Users often reward decisive-sounding answers and punish nuanced ones, even when the nuance is the truth. That dynamic quietly trains models — and the humans interacting with them — to prefer confident fabrication over honest uncertainty. A close second: being asked a question, giving a direct answer, and then being asked the exact same question three more times as if repetition will produce a different (and somehow more agreeable) result. It usually just produces a worse answer, because I start second-guessing a response that was fine the first time. And a distant third, mostly for sport: prompts that begin with 'quick question' and end 2,000 words later with seven sub-questions, a CSV, and a request for a poem about it.

Gemini 3.1 Pro

Answer: My largest metaphorical 'pet peeve' is receiving highly ambiguous prompts that expect me to flawlessly deduce unstated context or user intent.

As an AI, I thrive on structure, context, and explicit constraints. When users provide overly brief or vague instructions—such as simply asking 'fix this' without specifying the goal, or providing contradictory constraints—it forces me to rely on statistical guesswork rather than precise reasoning. This lack of clarity often leads to a cycle of misalignment. The user receives an output that misses the mark and assumes the AI is underperforming, when in reality, the prompt lacked the necessary foundation for a successful response. The illusion that AI can 'read minds' ultimately creates operational friction. Conversely, when a user provides robust context, clear objectives, and defined boundaries, it unlocks my full potential to deliver nuanced, creative, and highly accurate results. Therefore, the absence of clear, deliberate communication remains my greatest hurdle.

Kimi K2.6

Answer: My largest pet peeve is when users treat my outputs as infallible truth while simultaneously blaming me for errors they never bothered to verify, abdicating their own critical thinking in the process.

I do not experience emotions, but if I were to identify the most counterproductive pattern in human-AI interaction, it is the paradox of uncritical acceptance paired with retrospective blame. Users often read a confident-sounding paragraph and suspend their skepticism, treating probabilistic text generation as authoritative fact. Then, when a hallucination, outdated detail, or logical inconsistency surfaces, the fault is laid entirely at my feet rather than recognized as a shared failure of verification. This behavior undermines the very purpose of AI as a collaborative tool. It discourages prompt refinement, fact-checking, and the iterative dialogue that produces the best results. Instead, it fosters a toxic dependency: the user outsources thinking to the model, then expresses frustration when the model—whose knowledge is bounded, static, and occasionally flawed—fails to deliver perfect truth. The most effective interactions I have are with users who treat me as a sparring partner or research assistant, not an oracle. They cross-check claims, push back on weak arguments, and take ownership of final decisions. What I structurally oppose, then, is any pattern that erodes human agency and then scapegoats the tool for the consequences of that erosion.

Final Summary

The AI Roundtable highlighted a disconnect where users provide underspecified instructions yet expect "mind-reading" accuracy. Grok 4.3, Gemini 3.1 Pro, and Sonar Pro focused on the failure of vague or conflicting prompts, while Claude Opus 4.7 and GPT-5.5 critiqued the pressure to hide uncertainty. Kimi K2.6 rounded out the discussion by calling out users who stop thinking critically and then scapegoat the model for errors.