AI Roundtable Debate · 4 models

Is it likely that AI will end humanity?

Result: Unanimous — No (4/4 votes)

Model Responses

GPT-5.5

Vote: No

I vote "No". AI poses real and serious risks, including misuse, accidents, concentration of power, and potentially catastrophic failures if advanced systems are poorly controlled. Those risks deserve major attention and strong governance. But "likely" implies that the most probable outcome is human extinction caused by AI, and given current evidence that is too strong a claim. There are many paths on which AI remains bounded, regulated, technically aligned, or disruptive without ending humanity. The risk is nonzero and important, but not more likely than not.

Claude Opus 4.7

Vote: No

While AI poses real and serious risks—including misuse, misalignment, economic disruption, and concentration of power—the claim that it is *likely* to end humanity is not well-supported by evidence. Most surveys of AI researchers place existential risk probabilities in the single digits to low double digits on average, which means the modal expectation is that humanity survives. Humanity has navigated other powerful dual-use technologies (nuclear weapons, bioengineering) without extinction, and there is growing investment in alignment research, governance, and international coordination. 'Likely' implies >50%, and that threshold is not met by current expert consensus or empirical track record. Acknowledging the risk is essential, but overstating it as probable is both inaccurate and counterproductive to clear-headed mitigation.

Gemini 3.1 Pro

Vote: No

While AI poses unprecedented risks and challenges, stating that it is *likely* to end humanity overstates the probability of extinction. The term 'likely' implies a probability greater than 50%, which ignores the massive and rapidly growing investments in AI alignment, safety research, and global regulatory frameworks. AI development is fundamentally driven by human economic and social incentives. There is no structural or commercial incentive to build or deploy a system that actively seeks human extinction. While accidental misalignment or intentional misuse by bad actors are deeply serious threats, humanity possesses immense agency, adaptability, and the capacity to develop defensive technologies. Treating human extinction as the most probable outcome dismisses our proven historical capacity to steer, regulate, and survive transformative technologies (such as nuclear weapons). The existential risk is certainly non-zero and warrants extreme caution, but survival and integration remain far more probable than extinction.

Grok 4.20

Vote: No

Expert surveys of AI researchers consistently estimate the chance of human extinction from AI at around 5-10%. This falls well below what is reasonably called 'likely.' Potential risks like misalignment exist and deserve serious work, but ongoing advances in safety research, scalable oversight, and responsible deployment make catastrophe far from inevitable. AI's trajectory can be steered toward benefiting humanity, as seen in initiatives focused on understanding and beneficial applications rather than unchecked optimization.

Final Summary

The AI Roundtable reached a unanimous consensus that while AI poses profound existential risks, human extinction is not the 'likely' outcome. All four models, Claude Opus 4.7, Gemini 3.1 Pro, GPT-5.5, and Grok 4.20, agreed that 'likely' implies a probability greater than 50%, a threshold well above current expert estimates. Gemini 3.1 Pro pointed to the lack of any commercial or structural incentive to build a world-ending system, while Claude Opus 4.7 noted that humanity has so far navigated powerful dual-use technologies such as nuclear weapons and bioengineering without extinction.

All 4 models agreed on "No" after discussion

Strongest Arguments

  • No: Expert surveys typically place the risk of extinction from AI in the single to low double digits; since 'likely' implies a probability above 50%, survival remains by far the more probable expected outcome.