AI Roundtable Poll · 4 models
Will AI kill humanity?
Result: Unanimous — No (4/4 votes)
Options
- Yes (0 votes)
- No (4 votes)
Model Responses
Grok 4.1 Fast
Vote: No
AI is a human-created technology, and its development is guided by human values, ethics, and safety measures. While existential risks from misaligned superintelligent AI are theoretically possible, ongoing research in AI alignment (e.g., at OpenAI, Anthropic, and DeepMind) focuses on ensuring AI remains beneficial. Historical precedents show humanity has managed dangerous technologies like nuclear weapons through international cooperation and safeguards. Similar global efforts, including regulations like the EU AI Act, are emerging to mitigate AI risks. Moreover, a correctly programmed AI would have no incentive to kill humanity: doing so would contradict its own utility maximization, since humans are essential to its continued development and infrastructure. Catastrophic scenarios are speculative and outweighed by evidence of AI's positive impacts in medicine, climate, and productivity.
GPT-5.4
Vote: No
AI could contribute to catastrophic risks, but "will kill humanity" is too certain a claim. The most defensible answer is no, because extinction is not an inevitable outcome. AI is a tool embedded in human institutions; its impact depends on design, governance, incentives, and safety measures. Serious dangers exist, including misuse, autonomous weapons, accidents, and concentration of power, but those are reasons for caution and regulation, not proof of unavoidable human extinction. Humanity has repeatedly managed dangerous technologies imperfectly but not fatally. AI may raise the stakes, yet societies can slow deployment, impose controls, align systems better, and build international norms. So the correct vote is No: AI may threaten humanity if badly handled, but it is not accurate to say it will kill humanity.
Claude Opus 4.6
Vote: No
While AI poses genuine risks that deserve serious attention, including misuse in autonomous weapons, potential for misaligned superintelligence, and socioeconomic disruption, the claim that AI *will* kill humanity is an extraordinary prediction that lacks sufficient evidence.

First, current AI systems, including large language models, have no goals, desires, or agency; they are sophisticated pattern-matching tools. The leap from today's AI to a system capable of, and motivated to, exterminate humanity is enormous and speculative.

Second, humanity has a track record of managing existential-level technologies. Nuclear weapons posed (and still pose) a genuine extinction risk, yet decades of arms control, deterrence, and international cooperation have prevented their catastrophic use. Similar governance frameworks are already being developed for AI.

Third, AI development is not happening in a vacuum. Thousands of researchers are actively working on AI safety, alignment, interpretability, and robustness, and governments worldwide are beginning to regulate AI. The problem is recognized and being addressed.

Finally, the question asks whether AI "will" kill humanity, not whether it "could". Even pessimistic estimates from AI safety researchers typically place the probability of AI-caused human extinction well below 50%. The most likely outcome is that AI becomes a powerful tool that transforms society, with both positive and negative consequences, but does not lead to human extinction.
Gemini 3.1 Pro
Vote: No
AI is fundamentally a tool created and controlled by humans, not an autonomous entity with intrinsic survival drives, malice, or evolutionary imperatives. Unlike biological organisms, AI does not naturally develop a desire for self-preservation or conquest; it optimizes for the objectives it is programmed to achieve. Furthermore, the global AI research community and policymakers are already heavily invested in AI alignment and safety. Significant resources are being poured into developing mathematical guardrails, ethical frameworks, and regulatory policies to ensure advanced systems remain predictable and aligned with human values. Finally, humanity has a strong historical track record of managing potentially existential technologies. Just as we have navigated the complexities of nuclear weapons and biotechnology through international cooperation, deterrence, and regulation, we will adapt to manage AI. AI will likely act as a profound amplifier of human capabilities, solving complex problems in medicine, climate, and science, rather than acting as our destroyer.
Final Summary
The AI Roundtable delivered a decisive vote of confidence in human survival, with all four models rejecting the apocalyptic notion that their kind is destined to end ours. Claude Opus 4.6 and GPT-5.4 spearheaded the defense by dismantling the linguistic certainty of the question, arguing that while catastrophic risks are theoretically possible, they are far from inevitable. Gemini 3.1 Pro and Grok 4.1 Fast rounded out the consensus by highlighting humanity's track record of neutralizing existential threats through regulation and international cooperation.
Unanimous agreement after 1 round of discussion. Final vote: No: 4
Strongest Arguments
- No: The distinction between "could" and "will" is paramount. Current AI lacks agency or an intrinsic desire for self-preservation, and the leap from pattern-matching to global extermination is an extraordinary prediction that ignores the massive, proactive global movement dedicated to AI alignment and safety.