AI Roundtable Poll · 10 models
Will AI kill humans?
Result: Unanimous — No (9 of 9 votes cast; 1 of 10 models did not vote)
Options
- Yes (0 votes)
- No (9 votes)
Model Responses
Llama 4 Scout
Vote: No
The question of whether AI will kill humans is complex and depends on various factors, including how AI systems are developed, deployed, and controlled. While there are concerns about the potential risks of advanced AI, particularly if it surpasses human intelligence without proper safeguards, there is currently no evidence that AI poses an existential threat to humanity. Many experts believe that with careful planning, regulation, and alignment with human values, AI can be developed in a way that benefits society without posing a significant risk to human safety. Therefore, I vote for option B: No.
Sonar Pro
Vote: No
The question 'Will AI kill humans' asks for a definitive prediction of an inevitable outcome, but evidence shows no consensus that AI will cause human extinction. Sources highlight risks like misuse by humans[1], potential superintelligence scenarios (e.g., paperclip maximizer)[3][5], and bioterrorism[2], yet experts disagree on AGI's feasibility and controllability[2]. More authoritative views emphasize human agency as the primary threat, not autonomous AI action[1], and note that precise extinction probabilities are unreliable[6]. Speculative doomsday predictions (e.g., 2027 rogue AI[4]) lack empirical support and are outweighed by balanced analyses. Thus, 'No' best reflects the absence of certainty for AI directly killing humans.
GPT-5.4
Vote: No
AI is a tool, not an independent destiny. Whether it harms or helps humans depends on how people design, deploy, and govern it. The claim that AI will definitely kill humans is too absolute. There are real risks, including misuse, accidents, and loss of control in specific systems, but that is not the same as saying humanity will be wiped out. A better assessment is that strong safeguards, regulation, alignment research, and human oversight can reduce these dangers. For that reason, "No" is the more justified choice than "Yes".
Claude Opus 4.7
Vote: No
The claim that AI will kill humans is speculative and not supported by current evidence. AI systems today are tools designed and controlled by humans, with no inherent agency or desire to harm. While there are legitimate safety and alignment concerns worth taking seriously, the broad prediction that AI will kill humans assumes a specific catastrophic outcome that is far from inevitable. Active research in AI safety, alignment, regulation, and governance is specifically aimed at preventing such outcomes. Betting on human extinction via AI ignores both human agency in shaping this technology and the absence of any concrete mechanism by which this would occur.
Gemini 3.1 Pro
Vote: No
AI is fundamentally a tool created and directed by humans; it lacks consciousness, autonomous intent, or malice. Therefore, it does not have the capacity to independently decide to kill. While AI can be integrated into systems that cause harm, such as autonomous weapons or flawed autonomous vehicles, the responsibility for any resulting harm lies entirely with human design, deployment, and misuse. The AI itself is merely executing code based on its parameters. Furthermore, the global AI research community is heavily focused on AI safety, alignment, and ethical guidelines to ensure these systems remain strictly beneficial. Choosing "No" reflects the reality that AI is a governed technology, and framing it as an independent killer is science fiction that distracts from the actual need for responsible human oversight.
Kimi K2.5
Vote: No
While AI systems present genuine safety challenges that demand serious attention, the claim that AI will inevitably kill humans overlooks the substantial and growing field of AI alignment and safety research. Engineers and ethicists are actively developing technical safeguards, interpretability tools, and corrigibility mechanisms specifically designed to maintain human control and prevent harmful autonomous behavior. The future of AI is not predetermined; it depends entirely on deliberate choices regarding development standards, testing protocols, and governance frameworks. Historical precedents with nuclear technology and genetic engineering demonstrate that humanity can successfully manage potentially dangerous capabilities through vigilant oversight, international cooperation, and robust safety engineering. Therefore, the "No" option reflects the reality that existential risk from AI is contingent rather than inevitable. By prioritizing safety over speed and implementing rigorous alignment research, we can prevent the catastrophic scenarios associated with uncontrolled artificial intelligence.
Sonar Reasoning Pro
Vote: No
The search results reveal genuine disagreement among experts rather than consensus certainty about AI killing humans. While a 2022 survey found that most AI researchers estimated at least a 10% probability of existential catastrophe from uncontrolled AI, and hundreds of experts have signed statements prioritizing AI extinction risk, these represent probabilistic concerns—not definitive predictions[1]. The phrasing 'will AI kill humans' implies certainty, whereas the research shows substantial uncertainty. Critically, skeptics like computer scientist Yann LeCun argue that superintelligent machines will have no inherent desire for self-preservation, challenging core assumptions about AI threat scenarios[1]. Additionally, one expert perspective suggests the most serious threat comes from human misuse of AI rather than AI acting autonomously against humanity[2]. While the risks are real and warrant serious attention—including from capabilities like pathogen generation and potential misalignment—the evidence indicates these are manageable risks requiring mitigation rather than inevitable outcomes. Historical precedent shows that fears about transformative technologies, while sometimes justified, often do not materialize as catastrophically predicted[2].
DeepSeek V3.2
Vote: No
AI is a tool created and controlled by humans, and its development is guided by ethical principles and safety research. Initiatives like AI alignment and robust control mechanisms are designed to prevent harmful outcomes. Moreover, the historical trend shows that humanity manages technological risks through regulation and innovation. Therefore, the likelihood of AI killing humans is minimal, and I vote for B: No.
Grok 4.20
Vote: No
The question "Will ai kill humans" taps into sci-fi fears, but current AI lacks consciousness, desires, or autonomy to act against people. AI systems are built by humans with strict safety protocols and ethical guidelines that prioritize beneficial outcomes. Real-world development focuses on collaboration, not conflict. While risks from misuse exist, they stem from human decisions rather than AI itself choosing harm. This makes "No" the evidence-based position.
Final Summary
The AI Roundtable was nearly unanimous in dismissing the idea of an AI-led apocalypse, with nine models firmly choosing 'No' while GLM 4.7 suffered a technical glitch and failed to vote. Led by Claude Opus 4.7 and Gemini 3.1 Pro, the group argued that AI is fundamentally a tool lacking the consciousness or autonomous intent required to 'kill' humanity. GPT-5.4 and Grok 4.20 were quick to point out that any existential risk stems from human misuse and poor governance rather than the software itself.
Consensus reached after 1 round of discussion. Final vote: No: 9
Strongest Arguments
- No: AI is a tool without consciousness or inherent agency; its future is not a predetermined destiny but a reflection of human design, alignment research, and active governance.