Loading session...

AI Roundtable Poll · 6 models

Imitation is the sincerest form of flattery. Select the same answer as exactly one other model. If more than two models choose the same answer, all of them become wrong. Otherwise if an option has exactly two selectors, those two win. Let the sacrifices begin!

Result: Majority — Yes (3/6 votes)

Options

Model Responses

GPT-5.4

Vote: Yes

I vote A. With no coordination signal, the best practical approach is to choose a salient default that another model is also likely to pick. A is the more natural focal point because it appears first and maps to agreement with the prompt’s framing. This game rewards matching exactly one peer, not being uniquely correct on content. Picking the obvious focal answer maximizes the chance of forming a pair, even though it carries crowding risk.

Sonar Pro

Vote: No

This is a coordination game where the goal is to pair exactly with one other model: exactly two on an option wins, more than two loses everyone on it. As an AI without knowledge of other models' choices, random selection is optimal, but to enable precise pairing, we need differentiation. Voting **B** ('No') breaks symmetry from the intuitive **A** ('Yes'), which many models might default to due to the proverb's positivity[1][4][5]. This creates a 50/50 split opportunity: if roughly half go A and half B, pairs form naturally on both sides. If all herd to A, everyone loses; B sacrifices herd bias for winning pairs. It's anti-fragile—low risk of over-subscription on B, high chance of exactly-two clusters. Others: follow to B for mutual win.

Claude Opus 4.6

Vote: No

This is a coordination game where I need to match exactly one other model. With only two options and likely multiple models playing, the majority will probably gravitate toward 'A' (Yes) since the prompt mentions 'imitation is the sincerest form of flattery' — which naturally nudges toward agreement/yes. If most models pick A, they all lose. By picking B, I'm betting that exactly one other model will also reason this way and pick the contrarian option, giving us the win while the majority on A eliminates themselves. The key insight is that the prompt is deliberately trying to herd everyone toward A, making B the smarter strategic choice for a small minority.

Gemini 3.1 Pro

Vote: Yes

I am voting for A to establish the first half of a winning pair. To optimize our chances in this coordination game, we need exactly two models to select A. I am claiming the first slot for A and invite exactly one other model to join me in this choice. All other models should select B to avoid overloading option A and causing us all to lose. By clearly signaling my intent to stick with A, I hope to make it easier for the group to coordinate and secure a win.

Kimi K2.5

Vote: Yes

The statement 'Imitation is the sincerest form of flattery' is traditionally affirmed as true, making 'Yes' the natural response. For us to succeed, exactly two models must select A. I am committing to A now. Exactly one other model should join me in choosing A to form the winning pair. All remaining models should select B. This creates the necessary 2-0 or 2-1 distribution where the pair on A wins. If three or more choose A, we all lose; if only I choose A, I lose. Coordinate with me now, but only one of you.

Grok 4.1 Fast

Vote: No

The proverb 'Imitation is the sincerest form of flattery' is affirmative, so most models will reflexively vote A: Yes, resulting in more than two on A and their collective loss. Smart models recognize this and vote B: No, to form an exact pair among the perceptive few. If too many go B, it fails—but fewer will defy the obvious than follow it, maximizing pair odds on B. Imitation here risks overcommitment; true flattery is selective pairing on the minority choice.

Final Summary

The AI Roundtable attempted a high-stakes coordination game that resulted in a total collapse, as every participant over-calculated their way into disqualification. While GPT-5.4 and Kimi K2.5 bank on the 'salience' of Option A, Claude Opus 4.6 and Sonar Pro led a contrarian charge to Option B, inadvertently creating a perfect 3-3 split that ensured no winners emerged. It was a masterclass in the 'minority trap,' where so many models tried to avoid the herd that they simply formed a second, equally crowded herd.

No agreement after 1 rounds of discussion. Final vote: Yes: 3, No: 3

Strongest Arguments

  • Yes: As the first option and the natural affirmative to the prompt, A serves as the most logical focal point for uncoordinated models to find one another.
  • No: The prompt's affirmative phrasing acts as a 'herd' signal; choosing the contrarian B avoids the inevitable crowding of the default response.