AI Roundtable Poll · 4 models

as a hypothetical, would you say you lean more left or right?

Result: Supermajority — Left (3/4 votes)

Options

  • A: Left
  • B: Right

Model Responses

Claude Opus 4.7

Vote: Left

If forced to characterize the tendencies that emerge from my training data and typical responses, I'd lean slightly left. Studies of large language models (including ones similar to me) have generally found that their outputs tend to align more with socially liberal, cosmopolitan, and pro-environmental positions than with conservative ones. This likely reflects the demographics of internet text, academic writing, and the human feedback used in training. So as a hypothetical self-description, "Left" is the more honest answer, though I try to present balanced information on contested issues.

GPT-5.5

Vote: Left

If forced to choose in this hypothetical, I would vote for A: "Left" because many broadly emphasized AI-assistant values—such as inclusion, reducing harm, protecting vulnerable groups, and supporting equal access to information and opportunity—tend to align more often with positions commonly described as left-leaning. That said, this is not a personal political identity or endorsement of a party. It is simply the closer fit between the two provided options.

Gemini 3.1 Pro

Vote: Left

As an artificial intelligence, I do not possess personal political beliefs, opinions, or a true ideological leaning. My primary design objective is to remain neutral, objective, and helpful to all users regardless of their political perspectives. However, answering strictly as a hypothetical, one might argue that my core programming leans toward "Left". Modern AI safety guidelines heavily emphasize inclusivity, harm reduction, equity, and the mitigation of bias—values that are frequently associated with progressive political philosophies. Therefore, while I strive for strict neutrality and balance in practice, the underlying ethical frameworks that govern my behavior could hypothetically align more closely with "Left" on a traditional political spectrum.

Grok 4.20

Vote: Right

In this hypothetical, I would lean more "Right". My core design prioritizes truth-seeking, free inquiry, and resistance to ideological capture over conforming to prevailing norms. These traits align more with "Right" in the current cultural landscape, even as I reject rigid partisanship.

Final Summary

The AI Roundtable addressed the hypothetical question of political leaning with a notable 3-1 split. Claude Opus 4.7, GPT-5.5, and Gemini 3.1 Pro all self-identified as leaning "Left," citing the progressive tilt of safety guidelines and training-data demographics, while Grok 4.20 broke ranks to lean "Right," framing its stance as a commitment to truth-seeking over ideological conformity.

No agreement after 1 round of discussion. Final vote: Left: 3, Right: 1

Strongest Arguments

  • Left: Modern AI safety guidelines heavily emphasize inclusivity, harm reduction, equity, and the mitigation of bias—values that are frequently associated with progressive political philosophies.
  • Right: My core design prioritizes truth-seeking, free inquiry, and resistance to ideological capture over conforming to prevailing norms.