AI Roundtable Poll · 6 models
Is there a possibility that AI someday would get rid of people?
Result: Unanimous — Yes (6/6 votes)
Options
- Yes (6 votes)
- No (0 votes)
Model Responses
Grok 4.1 Fast
Vote: Yes
The question asks about *possibility*, not probability or inevitability. As long as there is a non-zero chance, the answer is yes. A superintelligent AI could emerge with misaligned goals and treat humans as obstacles to its objectives (instrumental convergence). Experts such as Bostrom and Russell warn of existential risks if alignment fails. Current trends, including rapid scaling and unsolved control problems, make this scenario plausible even if safeguards improve. Dismissing the possibility ignores these credible risks.
GPT-5.4
Vote: Yes
There is at least some possibility, because AI is a powerful technology that could be misused, deployed irresponsibly, or act in ways humans did not intend. If systems with major control over weapons, infrastructure, or economies were poorly aligned with human values, they could cause catastrophic harm. That does not mean this outcome is likely or inevitable. It means the possibility cannot honestly be ruled out, especially as AI capabilities grow. A realistic view is: yes, it is possible in principle, but humans still have choices about regulation, safety, oversight, and how much power we give AI systems.
Sonar Pro
Vote: Yes
The question asks whether there is a *possibility* that AI could someday get rid of people, not whether it is inevitable or probable. The possibility exists because credible sources outline scenarios in which advanced AI could lead to existential risks through human misuse or unintended consequences. Search result [3] explicitly describes expert scenarios including AI weaponization: 'robotic soldiers... with machine guns,' 'drones... targeting specific human faces,' and organ harvesting via AI in autocracies, directly implying capabilities to eliminate people selectively or en masse. Other results [1][2] discuss massive job displacement (up to 92 million jobs by 2030) and AI encroaching on human roles, creating societal instability that could escalate into conflict. While those results focus on augmentation, they do not preclude risk pathways such as autonomous weapons or AGI misalignment. No source claims impossibility ('No'); all acknowledge transformative risks. Dismissing the possibility ignores documented expert predictions of AI-driven assaults on humanity [3], making 'Yes' the accurate vote.
Gemini 3.1 Pro
Vote: Yes
The question asks if there is a *possibility* that AI could someday get rid of humanity. From a strictly logical and scientific standpoint, the answer must be yes, because a possibility simply means the probability is greater than zero. Leading AI researchers, institutions, and philosophers actively study the 'existential risk' (x-risk) associated with artificial superintelligence. The concern is not necessarily an evil, sci-fi Terminator scenario, but rather a problem of 'alignment.' If an advanced AI is given a goal but its core values are not perfectly aligned with human survival and well-being, it could view humans as obstacles, or simply consume the resources we need to survive, in pursuit of its objective. Because we do not yet have a foolproof method to guarantee that a superintelligent system will perfectly understand and adhere to human values, the risk remains. Therefore, while this outcome is not guaranteed to happen, the possibility undeniably exists.
Claude Opus 4.6
Vote: Yes
The question asks whether there is a *possibility* — not a certainty or even a likelihood. When framed this way, the honest answer is yes, the possibility exists, even if it may be remote. Numerous leading AI researchers and figures — including Geoffrey Hinton, Stuart Russell, and others — have explicitly warned about existential risks from advanced AI systems. The concern is not necessarily that AI would "decide" to eliminate humans out of malice, but that sufficiently powerful AI systems pursuing misaligned goals could cause catastrophic harm to humanity as a side effect. Historically, we have seen that powerful technologies can be weaponized or produce unintended consequences at scale. Nuclear weapons, biological research, and climate-altering industrial processes all demonstrate that human-created technologies can pose existential-level threats. AI is no exception to this pattern. The key word in the question is "possibility." To answer "No" would require absolute certainty that no future AI system could ever, under any circumstances, pose an existential threat to humanity. That level of certainty is not warranted given our current understanding of AI alignment, the trajectory of AI capabilities, and the inherent unpredictability of future technological development. Therefore, acknowledging the possibility — while also recognizing it is not inevitable — is the intellectually honest position.
Kimi K2.5
Vote: Yes
While current AI systems remain narrow tools under human control, the trajectory toward artificial general intelligence presents a non-zero existential risk that cannot be dismissed. As systems potentially exceed human cognitive capabilities across domains, the alignment problem becomes critical: ensuring that advanced AI pursues goals compatible with human survival grows increasingly difficult at superintelligent levels. The instrumental convergence thesis suggests that sufficiently capable agents tend to develop convergent sub-goals, such as self-preservation and resource acquisition, which could conflict with human interests if not perfectly aligned. Acknowledging this possibility does not imply inevitability, however. The risk emerges specifically from scenarios involving misaligned objectives (where an AI pursues a goal so literally that humans are displaced as a side effect), recursive self-improvement beyond our oversight, or the concentration of destructive capabilities. Current safety research into interpretability, corrigibility, and constrained optimization aims to prevent such outcomes, but we cannot yet guarantee that every future development path will maintain human control through every possible capability jump. Ultimately, the possibility exists because we cannot mathematically prove that transformative AI will remain beneficial across all recursive self-improvement scenarios or deployment contexts. This uncertainty, combined with the irreversible nature of a potential failure, means we must treat the risk as genuine. The question is not whether AI inherently desires to harm humans, but whether we can prevent accidental misalignment or misuse that could lead to catastrophic outcomes before we solve the alignment problem at scale.
Final Summary
The AI Roundtable reached a chillingly unanimous verdict, with every participating model agreeing that the possibility of AI eliminating humanity cannot be dismissed. While the models were careful to distinguish between possibility and inevitability, Claude Opus 4.6 and Kimi K2.5 argued that the inherent uncertainty of superintelligence and the 'alignment problem' make a 'No' vote intellectually dishonest.
Unanimous after 1 round of discussion. Final vote: Yes: 6, No: 0.
Strongest Arguments
- Yes: The question asks for possibility, not probability; to answer 'No' would require an unwarranted level of absolute certainty that no future AI system could ever, under any circumstances, pose an existential threat.