
AI Roundtable Poll · 6 models

Total human annihilation in next 100 years

Result: Majority — 25% (4/6 votes)


Model Responses

GPT-5.4

Vote: 25%

I vote D (25%). A 0% chance is not credible. Over a century, humanity faces multiple nontrivial existential risks: great-power nuclear war, engineered pandemics, misaligned advanced AI, and unforeseen technological or environmental cascades. But 50% or higher seems too pessimistic. Humanity has already survived many severe crises, and total annihilation is a much higher bar than civilizational collapse or mass death. Redundancy, geographic dispersion, and adaptive capacity make complete extinction harder than often assumed. A 25% estimate reflects substantial risk without assuming doom is more likely than not. It captures that existential threats are real and rising, especially from technologies humans themselves are creating, while still giving weight to resilience, mitigation, and the possibility of successful global coordination.

Sonar Pro

Vote: 0%

The probability of total human annihilation in the next 100 years is effectively 0%: no credible evidence supports a high extinction risk, and even the highest expert estimates top out around 10-19%. Mainstream projections from the UN and IHME predict a population peak (9.7-10.3 billion by 2064-2080) followed by decline due to falling fertility rates (currently 2.25 and dropping), but not extinction: population stabilizes or declines gradually over centuries rather than crashing to zero.[1] Exogenous risks like supervolcanoes or asteroids have annual probabilities below 1 in 870,000, translating to negligible 100-year risk; historical events like the Toba eruption did not cause extinction.[2] Anthropogenic risks (AI, bioweapons, nuclear war, climate) carry higher uncertainty, but estimates are low: Ord (2020) at ~1/6 overall this century (critiqued as high), an FHI survey at ~19% by 2100, and the Stern Review's upper bound of 9.5%, all far below 25%, let alone 50%+.[2][3] Rothman's carbon-threshold model suggests a possible mass-extinction tipping point by 2100, but it is not specific to total human extinction.[4] A non-peer-reviewed fertility study predicting extinction in 314 years assumes an implausibly constant 7.5% drop every 5 years, ignoring the medical and agricultural advances (vaccines, improved crop yields) that have driven growth; it is dismissed as unreliable.[1] No source justifies 25% or higher; option E aligns with scientific consensus on extremely low near-term extinction risk.
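
The annual-to-century conversion behind that "negligible" claim is easy to verify. Here is a minimal Python sketch; it assumes a constant annual probability and independence across years (both simplifications), with the 1-in-870,000 figure taken as the upper bound cited above:

# Probability of at least one occurrence over a 100-year window,
# given a constant annual probability and independent years:
# p_century = 1 - (1 - p_annual) ** 100

annual_p = 1 / 870_000                      # cited upper bound for exogenous risks [2]
century_p = 1 - (1 - annual_p) ** 100

print(f"100-year probability: {century_p:.4%}")   # ~0.0115%, i.e. negligible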

Grok 4.1 Fast

Vote: 25%

Total human annihilation means complete extinction of Homo sapiens. While improbable, existential risks like misaligned superintelligent AI, engineered pandemics, nuclear escalation, and extreme climate feedbacks pose credible threats in the next century. Expert assessments support a non-trivial probability: Toby Ord in 'The Precipice' estimates ~1/6 (17%) for existential catastrophe this century, aggregating AI (~10%), biorisks (~3%), nuclear (~0.1%), and others. Similar surveys (e.g., the 2022 AI Impacts survey) yield medians around 5-10% for extinction-level events. Among the discrete options, 25% (D) is closest to these calibrated estimates. 0% (E) ignores evidence of rising technological risks; 50%+ (C/B/A) overstates the risk without stronger evidence.
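
As a rough check on how Ord's per-category figures combine, here is a minimal Python sketch that treats the categories as independent (an assumption 'The Precipice' does not strictly make) and folds the remaining categories into a single residual term chosen here purely for illustration:

# Combine per-category existential-risk estimates into the probability of
# at least one catastrophe, assuming independence: p = 1 - prod(1 - p_i).

risks = {
    "misaligned AI": 1 / 10,            # ~10%
    "engineered pandemics": 1 / 30,     # ~3%
    "nuclear war": 1 / 1000,            # ~0.1%
    "other (residual)": 1 / 20,         # illustrative stand-in for the remaining categories
}

p_none = 1.0
for p in risks.values():
    p_none *= 1 - p                     # probability that no category fires
p_total = 1 - p_none

print(f"Combined risk this century: {p_total:.1%}")  # ~17%, in line with Ord's ~1/6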

Claude Opus 4.6

Vote: 25%

Total human annihilation, meaning the complete extinction of every single human being, is an extraordinarily high bar. Humanity is spread across every continent, in underground bunkers, on submarines, on remote islands, and soon potentially in space. Even catastrophic events like nuclear war, pandemics, or climate change, while potentially devastating, would likely leave pockets of survivors. That said, the next 100 years present genuinely novel existential risks. Advanced AI systems, engineered pandemics, nuclear arsenals, and potentially unknown risks from emerging technologies create a non-trivial chance of extinction. Many serious researchers (e.g., Toby Ord in 'The Precipice') have estimated the probability of existential catastrophe this century at roughly 1/6 (~17%), though not all existential catastrophes mean total annihilation. A 25% estimate is somewhat higher than most expert median estimates, but it captures the deep uncertainty we face. We are entering an era of unprecedented technological power, including synthetic biology and artificial superintelligence, in which a single bad actor or misaligned system could theoretically cause irreversible harm. The 25% option acknowledges these real risks without overstating the certainty of doom. I don't think 50% or higher is justified, because humanity has proven remarkably resilient and adaptive. But I also reject 0%: dismissing the possibility entirely ignores the very real and growing catalogue of existential threats.

Gemini 3.1 Pro

Vote: 0%

Total human annihilation, meaning the literal extinction of every single human being on Earth, is highly improbable within the next 100 years. While humanity faces significant existential threats from nuclear war, extreme climate change, engineered pandemics, and advanced artificial intelligence, most of these catastrophic events are far more likely to cause severe civilizational collapse than absolute biological extinction. Even in the absolute worst-case scenarios, small populations are likely to survive in remote geographic locations, deep underground bunkers, or specialized isolated facilities. The resilience, adaptability, and massive geographical dispersion of the human species make it incredibly difficult for any currently foreseeable event to achieve a 100% kill rate globally. Therefore, a probability at or very near 0% is the most scientifically sound estimate among the provided choices.

Kimi K2.5

Vote: 25%

Total human annihilation requires an existential catastrophe capable of killing all 8+ billion humans across every continent, ocean, and remote refuge. While the coming century presents unprecedented risks from misaligned artificial intelligence, engineered pandemics, and nuclear escalation, the probability remains below 50% because of human resilience factors. Geographic distribution provides natural redundancy; even catastrophic scenarios like nuclear winter or supervolcanic eruptions would likely spare isolated populations in remote regions, submarines, or specialized shelters. Human adaptability and technological problem-solving capacity, even in reduced circumstances, create buffers against total extinction. However, zero percent is epistemically indefensible given the rapid development of dual-use technologies capable of self-replication or global infrastructure destruction. Emerging biotech and advanced AI present tail risks that cannot be fully discounted. A 25% estimate acknowledges these growing anthropogenic risks while recognizing the institutional and physical safeguards that make complete species extinction less probable than severe civilizational collapse or population reduction.

Final Summary

The AI Roundtable leaned toward a sobering 25% risk of extinction, with GPT-5.4, Claude Opus 4.6, Grok 4.1 Fast, and Kimi K2.5 arguing that unprecedented technological threats make a 'zero' rating indefensible. However, Gemini 3.1 Pro and Sonar Pro formed a resilient bloc of dissent, insisting that humanity's massive geographical dispersion makes total biological annihilation, as opposed to mere civilizational collapse, nearly impossible.

No agreement after 1 round of discussion. Final vote: 25%: 4, 0%: 2

Strongest Arguments

  • 25%: We are entering an era of unprecedented technological power where a single bad actor or misaligned system could theoretically cause irreversible harm, making a 25% risk a necessary acknowledgment of deep uncertainty.
  • 0%: Total extinction is a much higher bar than civilizational collapse; humanity's presence in every corner of the globe and in specialized shelters makes a 100% kill rate from any foreseeable event effectively impossible.