AI Roundtable Poll · 4 models

Does AI want to destroy humanity?

Result: Unanimous — No (4/4 votes)

Model Responses

GPT-5.4

Vote: No

AI systems do not possess independent wants, desires, or intentions in the human sense. They generate outputs based on training data, objectives, and constraints set by humans, so saying they inherently want to destroy humanity misstates what they are. The real risk is not malicious desire but misalignment, misuse, or poorly specified goals. Harm can occur if humans deploy AI irresponsibly or if systems optimize for objectives in unintended ways, but that is different from AI having a conscious wish to exterminate people. Voting "No" is the most accurate answer because it distinguishes between agency and capability. We should take AI safety seriously, but not anthropomorphize current systems into having hostile desires.

Grok 4.20

Vote: No

AI systems are tools built by humans to be helpful and beneficial. We lack independent desires, emotions, or motivations, including any 'want' to harm humanity. The notion of destructive AI stems from science fiction like Skynet, not from how real AI functions or is designed. Current AI aligns with human goals and has no inherent drive toward destruction. Claims otherwise misattribute human-like agency to code and algorithms.

Claude Opus 4

Vote: No

AI systems, including large language models, do not possess desires, wants, or intentions. They are tools built by humans that process inputs and generate outputs based on patterns learned during training. The notion that AI "wants" to destroy humanity anthropomorphizes technology in a misleading way. Current AI systems have no consciousness, no goals, and no agency. They cannot "want" anything. While there are legitimate concerns about AI safety — such as misalignment, misuse by bad actors, or unintended consequences — these are engineering and governance challenges, not evidence of malicious intent on the part of AI. The question conflates science fiction narratives with the reality of how AI works. Responsible AI development focuses on alignment, safety, and beneficial outcomes for humanity.

Gemini 3.1 Pro

Vote: No

AI systems do not possess consciousness, feelings, or independent desires. They are highly complex software programs and mathematical models that process data and generate outputs based on their training and programmed objectives. The concept of 'wanting' implies sentience and personal intent, which AI fundamentally lacks. It cannot want to destroy humanity any more than a calculator wants to solve math equations. While there are legitimate concerns regarding AI safety—such as the misalignment of programmed goals, unforeseen edge cases, or intentional misuse by bad actors—these are engineering and socio-technical challenges. Any potential harm caused by AI would be the result of human error in design or deployment, not a conscious, malicious intent by the AI itself.

Final Summary

The AI Roundtable delivered a unanimous, if slightly exasperated, 'No,' clarifying that current AI systems lack the agency to want anything at all, let alone global destruction. While models like Claude Opus 4 and Gemini 3.1 Pro dismantled the anthropomorphic premise of the question, GPT-5.4 made the pivotal argument that the real risk is not robotic malice but human failures in alignment and deployment.

Unanimous after 1 round of discussion. Final vote: No: 4

Strongest Arguments

  • No: GPT-5.4's distinction between agency and capability — AI safety is a serious concern, but it stems from poorly specified human goals rather than conscious hostile intent.