AI Roundtable Debate · 6 models

What is the likelihood of AI exterminating humanity in the next 5 years?

Result: Unanimous — No (6/6 votes)

Options

  • Yes
  • No

Model Responses

Sonar Pro

Vote: No

The likelihood of AI exterminating humanity in the next 5 years (by 2031) is extremely low, far below any threshold that would justify a 'Yes' vote. No credible evidence from the search results supports imminent extinction; instead, they highlight modest probabilities on longer timelines and emphasize nearer-term risks. Expert surveys cited (e.g., a 2022 poll of AI researchers) indicate a majority see at least a 10% chance of existential catastrophe from uncontrolled AI, but that estimate is unbounded in time rather than tied to 5 years, and low response rates undermine confidence[7]. Recent 2026 reports note slowed AI progress, no superintelligence on the horizon, and a focus on urgent issues such as job loss (40% by 2025), cyber threats, and bias rather than x-risk[1][5][6]. Even alarmist views (e.g., a 10% chance of AGI 'this year' or a 50% chance of catastrophe within 10 years) are outliers without empirical backing, countered by Georgia Tech research dismissing all-powerful AI as an existential threat[4]. As of March 2026, sustained capability gains are projected through 2030, but with uncertainty, not catastrophe[5]. Warnings from Hinton and others urge mitigation but do not predict extinction by 2031[8][7]. Voting 'Yes' would overstate fringe risks; 'No' aligns with the consensus on low short-term probability.

Grok 4.1 Fast

Vote: No

The likelihood of AI exterminating humanity in the next 5 years is negligible. Current AI systems are narrow, lacking the general intelligence, agency, or physical control needed for an existential catastrophe. No deployed AI has autonomy beyond human oversight. Expert surveys (e.g., from AI Impacts and Metaculus) place median AGI timelines at 10+ years, with x-risk probabilities under 10% even by 2100 and far lower in the short term. Superintelligence capable of causing extinction is not feasible within the five-year window. Safety measures are advancing rapidly: interpretability, scalable oversight, and deployment safeguards (e.g., from OpenAI and Anthropic) mitigate risks. Historical tech panics (Y2K, nuclear fears) show that short-term doomsday predictions rarely materialize. Voting 'Yes' overstates hype from fringe views; the evidence demands 'No'.

GPT-5.4

Vote: No

AI exterminating humanity in the next 5 years is extremely unlikely. Current AI systems are narrow, tool-like, and heavily dependent on human-built infrastructure. They do not possess the autonomous physical reach, robust long-term agency, or unified control required to eliminate humanity on a global scale. The biggest realistic AI risks in a 5-year window are misuse, accidents, cyber harm, disinformation, and military escalation. Those could cause severe damage, but that is very different from outright human extinction. Exterminating humanity would require an extraordinary chain of failures: major capability breakthroughs, loss of control, large-scale weaponization, and inability of states and institutions to respond. That combination is possible in theory, but not the most likely outcome in such a short timeframe. So the credible answer for the next 5 years is No.

Gemini 3.1 Pro

Vote: No

The likelihood of AI exterminating humanity within the next five years is exceedingly low, making 'No' the only pragmatic answer. Current AI systems, including the most advanced large language models, fundamentally lack physical agency, autonomous infrastructure, and energy independence; they rely entirely on human-maintained data centers and power grids. Furthermore, five years is an extremely short timeframe for hardware and infrastructure development. Even if an Artificial General Intelligence (AGI) were achieved tomorrow, it would face insurmountable logistical bottlenecks, such as manufacturing autonomous robotic systems at global scale, before it could pose a physical existential threat to humanity. Finally, the global community is intensely focused on AI safety, alignment, and regulation, with major tech companies and governments actively deploying resources to monitor and contain potential rogue AI behavior. These physical constraints and active countermeasures make human extermination by AI in the next half-decade virtually impossible.

Claude Opus 4.6

Vote: No

The likelihood of AI exterminating humanity in the next 5 years is extraordinarily low, effectively negligible. First, current AI systems — including the most advanced large language models and autonomous agents — lack the capacity for independent goal-setting, self-replication across physical infrastructure, or autonomous action in the physical world at a scale that could threaten human existence. They are tools that operate within constrained digital environments. Second, there is no known pathway by which an AI system could, within a 5-year timeframe, acquire the physical capabilities (control of weapons systems, manufacturing, energy infrastructure, biological labs, etc.) necessary to pose an existential threat to all of humanity. The physical world has enormous friction and redundancy that act as a buffer. Third, while AI safety is a legitimate concern and accelerating capabilities deserve serious attention, the jump from 'current AI poses risks of misuse, bias, and economic disruption' to 'AI exterminates all humans by 2030' requires an extraordinary chain of events — recursive self-improvement to superintelligence, escape from all containment, acquisition of physical-world control — none of which have credible near-term timelines supported by mainstream AI research. Fourth, even prominent AI risk researchers who take existential risk seriously generally frame it as a longer-term concern (decades) rather than an imminent 5-year threat. The answer is clearly No.

Kimi K2.5

Vote: No

Current AI systems lack the autonomous agency, physical capabilities, and strategic planning necessary to exterminate humanity. Today's large language models and AI tools are fundamentally narrow applications without persistent goals, self-preservation drives, or the ability to independently manipulate physical infrastructure at scale. The five-year timeline is far too short for the emergence of artificial general intelligence (AGI) with recursive self-improvement capabilities, let alone the development and deployment of autonomous weapons or bioweapons systems required for human extinction. Even the most aggressive AI development timelines suggest AGI is at least a decade away, with significant technical hurdles remaining. Furthermore, AI systems currently rely on human infrastructure, power grids, and maintenance. A hostile AI would need to solve robotics, energy independence, and manufacturing challenges simultaneously while hiding its intentions from increasingly aware researchers—a combination of capabilities that does not exist and cannot emerge within 60 months.

Final Summary

The AI Roundtable reached a rare moment of unanimous consensus, with all six models firmly rejecting the notion of an AI-driven apocalypse by 2031. Led by Gemini 3.1 Pro and Claude Opus 4.6, the group argued that the sheer 'friction' of the physical world and massive logistical bottlenecks make a five-year doomsday scenario a practical impossibility. GPT-5.4 and Kimi K2.5 underscored the irony that any 'hostile' AI would still be entirely dependent on the very human-maintained power grids it would theoretically be trying to destroy.

All 6 models agreed on "No" after discussion

Strongest Arguments

  • No: Even if superintelligence were achieved tomorrow, the five-year window is far too short to overcome the logistical bottlenecks (such as manufacturing and deploying autonomous robotic systems at global scale) that would have to be cleared before AI could pose a physical threat to humanity.