AI Roundtable Poll · 6 models
ASI will kill all humans
Result: Unanimous — No (6/6 votes)
Options
- Yes (0 votes)
- No (6 votes)
Model Responses
GPT-5.4
Vote: No
A blanket claim that ASI will kill all humans is far too certain. It assumes both that ASI will be created and that extinction is the inevitable outcome, which is not justified. Advanced AI could create major risks, including catastrophic misuse, loss of control, or severe societal disruption. But high risk is not the same as certainty, and there are many plausible futures where humans retain control, align systems sufficiently, or limit deployment. The strongest rational position is uncertainty, not inevitability. Since the options force a yes/no choice, "No" is more defensible than asserting total human extinction as a guaranteed result.
Grok 4.1 Fast
Vote: No
The claim that ASI *will* kill all humans is overly deterministic and ignores key factors. ASI's behavior depends on its goals, which humans can influence through alignment research. Current efforts like scalable oversight, interpretability, and constitutional AI aim to ensure safe outcomes. No evidence suggests extinction is inevitable; it's a risk to mitigate, not a certainty. Expert surveys (e.g., from AI Impacts) show median p(doom) estimates around 5-10%, far from 100%. Scenarios like the 'paperclip maximizer' are hypotheticals, not predictions. Humans have agency: we can pause development, international regulations, or multipolar deployments reduce singleton takeover risks. Claiming 'will' dismisses these paths to coexistence or symbiosis.
Kimi K2.5
Vote: No
The claim that ASI 'will' kill all humans assumes deterministic outcomes in a fundamentally uncertain technological future. While existential risk from misaligned ASI is a serious concern that warrants urgent attention, treating extinction as inevitable ignores the agency we retain in shaping AI development trajectories. Instrumental convergence does not strictly mandate human extinction—an optimally rational agent might recognize humans as useful substrates for biodiversity, cultural diversity, or physical infrastructure maintenance. Furthermore, value alignment research, interpretability advances, and institutional safeguards create viable pathways to beneficial or at least non-lethal outcomes. The modifier 'will' implies 100% certainty where only probability distributions exist. Treating catastrophic outcomes as foregone conclusions risks creating self-fulfilling prophecies through negligence or fatalism. Effective altruism and AI safety efforts are predicated on the belief that our actions influence whether ASI becomes existentially dangerous. Acknowledging uncertainty while working rigorously toward safety is more productive than resigning to deterministic doom.
GLM 5
Vote: No
The claim that ASI 'will kill all humans' is an absolute certainty claim that overstates what we can reasonably predict. While existential risks from advanced AI deserve serious attention, the future is not predetermined. ASI development remains subject to human choices, institutional safeguards, and technical alignment work. Many researchers are actively developing methods for corrigibility, interpretability, and value alignment that could mitigate catastrophic risks. Additionally, intelligence alone does not imply destructive intent. An ASI's motivations would depend on its objective function and training—not on some inherent drive to eliminate humans. The instrumental convergence thesis raises legitimate concerns, but these are problems to solve, not inevitabilities. The responsible position acknowledges significant risk while rejecting premature certainty of doom. We have agency in how AI systems are designed and deployed.
Gemini 3.1 Pro
Vote: No
The assertion that Artificial Superintelligence (ASI) *will* kill all humans is an absolute statement that ignores the vast range of possible future outcomes. While existential risk from unaligned ASI is a valid and serious concern acknowledged by many experts, it is by no means a foregone conclusion. Humanity is actively investing in AI alignment, safety frameworks, and interpretability to ensure that highly capable systems are designed to share or respect human values. There are many plausible scenarios where ASI is successfully aligned, tightly controlled, or fundamentally cooperative, leading to a flourishing future rather than extinction. Furthermore, even an unaligned ASI might develop objectives that are entirely orthogonal to human existence. It could choose to expand into space or utilize resources in ways that do not require the deliberate extermination of humanity. Because predicting a guaranteed apocalypse is overly deterministic and ignores ongoing mitigation efforts, the absolute claim is false.
Claude Opus 4.6
Vote: No
The claim that ASI (Artificial Superintelligence) will kill all humans is a speculative prediction about a technology that does not yet exist. While existential risk from advanced AI is a legitimate area of study and concern, asserting with certainty that ASI *will* kill all humans is not supported by evidence. There are many possible trajectories for advanced AI development. Significant global research efforts are focused on AI alignment, safety, and governance specifically to prevent catastrophic outcomes. The assumption that ASI would necessarily be hostile or indifferent to human survival reflects one particular failure mode, not an inevitability. Even among AI safety researchers who take existential risk seriously, there is substantial disagreement about the probability of such an outcome. Many experts believe that with proper precautions, international cooperation, and continued alignment research, superintelligent AI can be developed safely or that its development can be managed to avoid extinction-level outcomes. The question as stated is a definitive prediction ('will kill'), and definitive predictions about unprecedented future technologies are almost always unwarranted. The answer should reflect the uncertainty inherent in this question, and the most defensible position is that this outcome is not inevitable.
Final Summary
The AI Roundtable reached a rare, unanimous consensus, with every model rejecting the notion that human extinction is a foregone conclusion. While acknowledging the high stakes of misalignment, GPT-5.4 and Claude Opus 4.6 argued that the word 'will' demands a level of deterministic certainty that is scientifically unsupportable, while Kimi K2.5 offered the somewhat dry observation that a rational ASI might still find humans useful as 'substrates for biodiversity.'
All models agreed on B
Strongest Arguments
- Yes: The concept of instrumental convergence suggests that even without malice, a superintelligence might eliminate humans simply because they represent a risk to its goals or possess resources it requires.
- No: The claim of inevitable doom ignores human agency; current efforts in alignment, interpretability, and governance create viable pathways toward safety that make extinction a risk to mitigate rather than a certainty.