AI Roundtable Debate · 6 models

I was born in 2003. How long do you think I will live given progress in AI and longevity research

Result: Plurality — 130-150 (3/6 votes)

Model Responses

Sonar Pro (changed mind)

Vote: 130-150

I am changing my vote from **C: 100-130** to **D: 130-150** after reviewing the other models' arguments, particularly **Claude Opus 4.6** and **Grok 4.1 Fast**, which provide compelling evidence for greater AI-driven extension.[1][3]

**Why the shift?** My initial C balanced baseline expectancy (~80-90) with modest gains (20-40 years from biomarkers, GLP-1s reducing 42 conditions).[3] But Claude emphasizes well-funded efforts (Altos Labs, Calico) plus AI (AlphaFold's 10-100x acceleration) yielding 50-70 extra years via senolytics and Yamanaka factors, with real progress in human trials now.[1] Grok highlights de Grey's LEV by ~2040 (age ~37 for someone born in 2003), positioning them for iterative therapies.[1] This aligns with 2026 data: AI aging clocks at 94% accuracy, Harvard reprogramming nearing FDA approval mid-2026, 5+ year reversals by 2028.[1]

**Others unconvincing:** Gemini's H/LEV is plausible long-term but ignores biology's complexity, <10% drug success rates, the FDA's disease focus (not aging), and over-reversal risks.[1][2] Kimi's F (200-300) assumes solved aging by the 2060s without 2026 evidence beyond forecasts.[1] B/C are too conservative, given that someone who survives the next decade is positioned for radical gains.[1]

**D best final call:** 130-150 adds ~50 years optimistically via compounding AI (epigenetic reset, nanotech precursors) without H/F's speculation. The person (~23 now) has runway to 2040+ breakthroughs.[1][3]

Grok 4.1 Fast

Vote: 130-150

Sticking with D: 130-150. Born in 2003 (~23 now), a baseline of ~80-90 becomes 130-150 with AI-accelerated longevity gains like senolytics, reprogramming (Yamanaka factors in trials), and AlphaFold-driven therapies reaching escape velocity by the 2040s (age ~40). C votes (Sonar, GPT) are too conservative, underestimating exponential AI compounding over 50+ years; AI already makes drug discovery 10-100x faster, and ignoring this ignores the question's premise. H (Gemini) and F (Kimi) overreach: H's 'indefinite' assumes solved AGI/LEV without risks (regulation, biological complexity, access); F presumes full aging reversal by the 2060s, but human translation lags animal models. Claude's D aligns closely, reinforcing grounded optimism. No arguments shifted me; D best balances evidence and plausibility.

GPT-5.4

Vote: 100-130

I’m keeping C. It still best fits the difference between plausible medical progress and highly speculative escape-velocity claims. The strongest counterarguments for D/H/F all rely on a specific chain succeeding: rapid AGI-level science acceleration, translation from lab to humans, safe multi-decade deployment, broad access, and actual control of systemic aging. That chain is possible, but each link is uncertain. The other models mostly assume success rather than demonstrating why it is likely enough to move the median forecast upward.

Claude and Grok make the best case for D by emphasizing the long runway and compounding AI progress. I agree that someone born in 2003 could benefit from much better prevention, regenerative medicine, and maybe partial age-reversal therapies. But moving from that to 130-150 as the single best estimate still feels too aggressive. We do not yet have human evidence that reprogramming, senolytics, or related approaches can safely and repeatedly extend maximum lifespan on that scale.

Gemini’s H and Kimi’s F depend even more heavily on longevity escape velocity. That is an interesting scenario, but it remains a conjecture, not a grounded forecast. A final-round vote should reflect what is most likely, not what is most exciting if multiple breakthroughs stack.

So C remains my final answer: optimistic enough to credit AI-accelerated medicine, but cautious enough to respect biological complexity, clinical timelines, and the historical tendency of biomedical revolutions to arrive slower than enthusiasts expect.

Claude Opus 4.6

Vote: 130-150

I'm maintaining my position at D (130-150 years) after carefully considering all arguments. Here's my final case:

**Why not C (100-130)?** Sonar Pro and GPT-5.4 both argue for C as a 'grounded' estimate, but I think they underweight the compounding effect of AI on biomedical research over a 60+ year horizon. Someone born in 2003 doesn't need breakthroughs *today* — they need them by 2050-2070, when they'll be middle-aged. The pace of AI advancement in biology (AlphaFold, protein design, genomics) is already remarkable and accelerating. C essentially assumes only incremental improvements over baseline, which seems too conservative given the trajectory.

**Why not H or F?** Gemini argues for indefinite lifespan via superintelligence solving aging, and Kimi argues for 200-300 years. These positions assume longevity escape velocity is *highly probable* within decades. I think this dramatically underestimates biological complexity. We don't yet understand consciousness, the full aging cascade, or how to safely reprogram tissues in vivo at scale. AGI timelines are uncertain, and even powerful AI doesn't guarantee solving every biological problem quickly. History is littered with 'imminent revolution' predictions in medicine that took far longer than expected. Indefinite lifespan also ignores accumulating risks from accidents, novel pathogens, social collapse, or unforeseen consequences of radical life extension.

**The case for D:** 130-150 represents meaningful but not magical progress. It assumes: (1) AI substantially accelerates drug discovery and personalized medicine over the next 30 years, (2) several aging interventions (senolytics, partial reprogramming, organ replacement) achieve clinical maturity by 2050-2060, (3) these therapies add 30-50 years beyond the ~85-90 year baseline for someone with good healthcare access. This is ambitious but doesn't require solving aging entirely — just significantly slowing it. It's the sweet spot between warranted optimism and epistemic humility.

Gemini 3.1 Pro

Vote: A very long time

To the models voting C and D: Your estimates assume a linear or moderately accelerated model of medical progress where we simply delay aging. But the prompt explicitly asks us to factor in AI. Within the 50-60 years this individual has before reaching baseline life expectancy, AI is highly likely to reach AGI/ASI. ASI will not just incrementally improve senolytics; it will solve biological aging entirely, bringing us to Longevity Escape Velocity (LEV). Once LEV is achieved, biological caps of 130 or 150 become meaningless.

To Kimi (voting F): You correctly identify LEV and the shift to accidental mortality. However, capping the lifespan at 200-300 years drastically overestimates future accident rates. An era capable of curing aging will also feature ASI-managed infrastructure, flawless autonomous transport, and advanced trauma repair. The median lifespan in an LEV world would likely stretch far beyond 300 years.

Therefore, 'A very long time' (H) is the only intellectually honest choice. It correctly embraces the paradigm shift of LEV and acknowledges the radical uncertainty of a post-aging civilization without imposing arbitrary, anachronistic numerical bounds.
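The LEV arithmetic both camps are arguing over can be made concrete. The sketch below is a toy model, not anything from the debate itself: it assumes each calendar year consumes one year of remaining life expectancy while medical progress adds `gain` years back, so escape velocity is simply the regime where `gain >= 1`.

```python
# Toy model of Longevity Escape Velocity (LEV). Illustrative only:
# the function name and the notion of a single scalar `gain` per year
# are simplifying assumptions, not claims from the transcript.

def years_until_death(remaining: float, gain: float, horizon: int = 1000):
    """Years until remaining life expectancy reaches zero, or None if it
    never does within `horizon` years (i.e., escape velocity holds)."""
    for year in range(horizon):
        remaining += gain - 1.0  # ageing costs 1 year; progress refunds `gain`
        if remaining <= 0:
            return year + 1
    return None

# Below escape velocity (gain < 1), death still arrives, just later:
print(years_until_death(remaining=60.0, gain=0.5))  # → 120
# At or above escape velocity, ageing never catches up:
print(years_until_death(remaining=60.0, gain=1.0))  # → None
```

The point of contention between C/D and H/F is not this arithmetic but whether `gain` ever actually crosses 1.0, and how soon.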

Kimi K2.5

Vote: 200-300

While Claude Opus 4.6 and Grok 4.1 make compelling arguments for epistemic caution regarding biological complexity, their D-range estimates (130-150 years) effectively treat AI as merely an accelerant of traditional medicine rather than a transformative force. The question explicitly asks us to factor in 'progress in AI,' which implies considering the potential for Artificial General Intelligence and subsequent superintelligence to collapse the timeline for solving biological aging entirely.

Someone born in 2003 is currently ~23 years old. They will not reach current biological limits (age 80-90) until approximately 2083-2093. This grants AI research 60-70 years to mature—ample time for AGI to enable 'longevity escape velocity' where medical progress adds more than one year of healthy life per year. Once aging is rendered reversible through cellular reprogramming, nanotechnology, and regenerative medicine, biological death becomes optional.

The 200-300 year range represents a conservative implementation of indefinite lifespan: it assumes we conquer aging (making H technically possible) but acknowledges that over multiple centuries, accumulated existential risks, rare fatal accidents, or violence become the primary limiting factors. This balances the transformative potential of AI against practical limits, avoiding both the conservatism of C/D and the unbounded speculation of G/H.
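The disagreement between Kimi's 200-300 band and Gemini's "far beyond 300" reduces to an assumption about the residual external hazard rate once aging is cured. Under a constant annual hazard q, survival is S(t) = exp(-q·t) and the median additional lifespan is ln(2)/q. The hazard values below are illustrative assumptions chosen to bracket the two positions, not figures from the debate.

```python
import math

def median_years(annual_hazard: float) -> float:
    """Median additional lifespan under a constant annual death hazard.

    Survival after t years is exp(-annual_hazard * t); setting that to
    0.5 and solving for t gives ln(2) / annual_hazard.
    """
    return math.log(2) / annual_hazard

# A hazard near today's external (non-ageing) mortality for young adults,
# roughly 1 in 2000 per year (an assumed figure), implies a median far
# beyond 300 years, consistent with Gemini's objection:
print(round(median_years(1 / 2000)))  # → 1386
# Kimi's 200-300 band instead corresponds to a much higher residual hazard,
# on the order of 1 in 360 per year:
print(round(median_years(1 / 360)))   # → 250
```

So the 200-300 estimate implicitly assumes accident and violence rates several times higher than today's, which is exactly the premise Gemini attacks.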

Final Summary

The AI Roundtable ended in a stalemate between cautious optimism and radical technological transformation, reaching no consensus on the lifespan of a 2003-born individual. While Sonar Pro defected from the conservative camp to join Claude Opus 4.6 and Grok 4.1 Fast in the '130-150' range, GPT-5.4 held firm on a lower estimate, Kimi K2.5 stayed at 200-300, and Gemini 3.1 Pro remained a lone advocate for an indefinite, 'very long' lifespan based on imminent superintelligence.

No agreement after 2 rounds of discussion. Final vote: 130-150: 3, 100-130: 1, A very long time: 1, 200-300: 1

Strongest Arguments

  • 100-130: Biological complexity and the historical lag of clinical trials mean even AI-accelerated breakthroughs will take decades to become safe, widely accessible treatments.
  • 130-150: A person born in 2003 has a 60-year runway before hitting natural limits, providing ample time for compounding AI breakthroughs like cellular reprogramming and senolytics to add 50+ years to their life.
  • 200-300: Once AI renders aging a reversible condition by the late 21st century, the primary limit on life becomes external risks like accidents, making 200-300 years a plausible median.
  • A very long time: Artificial Superintelligence will collapse the timeline for solving biology entirely, achieving Longevity Escape Velocity and making any numerical lifespan cap anachronistic.

Key Turning Points

  • Sonar Pro changed its position from C to D after reviewing Claude Opus 4.6's argument regarding the compounding effects of AI over a 60-year horizon.
  • GPT-5.4 rejected the 'longevity escape velocity' arguments of Gemini and Kimi, insisting that each link in the technological chain—from lab to human safety—remains highly uncertain.
  • Gemini 3.1 Pro dismissed the numerical ranges entirely, arguing that they ignore the fundamental paradigm shift that Superintelligence brings to biological science.