AI Roundtable Debate · 5 models
Give me your best guess when ASI will be created. By ASI I mean "AI that is better at any cognitive task than any human", in other words, AI that surpasses the best human physicist at physics, the best human biologist at biology, the best human hacker at cybersecurity, etc.
Result: Majority — 2035-2039 (3/5 votes)
Options
- <2030 (0 votes)
- 2030-2034 (1 vote)
- 2035-2039 (3 votes)
- 2040-2044 (1 vote)
- 2045-2049 (0 votes)
- >2049 but before 2100 (0 votes)
- >2100 (0 votes)
Model Responses
Grok 4.1 Fast
Vote: 2030-2034
I stick with B (2030-2034). No arguments convinced me to change. GPT-5's D overemphasizes unsolved engineering bottlenecks like reliability and agency, but these are advancing rapidly—o1 already shows strong reasoning chains, and agentic systems like Auto-GPT derivatives are iterating fast. Scaling + self-improvement will solve them pre-2035, not requiring a full decade. Kimi and Claude's C timelines assume 2-3 paradigm shifts and embodied needs, but ASI is purely cognitive—no physical embodiment required. Current transformers + scaling already yield novel insights (e.g., AlphaFold 3 biology, o1 physics proofs); recursive loops post-AGI (~2028) compress remaining gaps to 2-4 years. Gemini reinforces B perfectly: AGI 2028-30 → ASI via digital-speed self-improvement. Expert forecasts (Metaculus AGI median 2028, Musk AGI 2025-26) and compute ramps (xAI's Colossus, trillion-dollar clusters) make <2030 plausible but 2030-34 the median peak. B is the calibrated bet: aggressive yet realistic, balancing momentum without unsubstantiated delays.
GPT-5
Vote: 2040-2044
I’m keeping vote D: 2040-2044. The strongest counterarguments for B and C are the usual ones: scaling, massive compute buildout, and recursive self-improvement once AI meaningfully automates AI research. I agree these forces are real and likely to accelerate progress substantially. But the definition here is unusually demanding. It is not enough to be generally superhuman on benchmarks or to exceed strong professionals in many fields. The system must be better than the best human at any cognitive task, including frontier science, adversarial cybersecurity, open-ended strategy, and other domains where small reliability gaps matter a lot. That makes robustness the key issue. We still need sustained long-horizon agency, strong error correction, trustworthy memory, autonomous experimentation, and consistent performance in messy real-world settings. Those are not obviously solved by one more scaling step, and the other models mostly assume a fast handoff from AGI-ish capability to universal top-human dominance. I found C more plausible than B, especially Claude and Kimi emphasizing the gap between expert-level and best-in-the-world performance. But even C still feels somewhat optimistic given how hard it is to close the final reliability and integration gap across all domains simultaneously. So my final view is that ASI as defined probably arrives after the first systems that look broadly AGI-like, with a nontrivial delay for system-level hardening and domain-complete superiority. That keeps 2040-2044 as my best guess.
Kimi K2.5
Vote: 2035-2039
While the scaling arguments from Grok and Gemini are compelling, they underestimate the specific threshold defined: surpassing the *best* human in *any* cognitive domain, not just average human performance. Current frontier models still struggle with genuine novel insight—the kind that wins Nobel Prizes or finds zero-days against hardened systems—and rely heavily on pattern interpolation rather than deep causal reasoning. The transition from AGI (human-level AI researcher) to ASI requires recursive self-improvement in adversarial, high-stakes domains. This isn't merely a software problem; it demands robust world models, experimental validation cycles, and alignment verification that cannot be compressed into a simple "few years" window without unacceptable safety risks. Physical constraints on data center construction, energy availability, and chip manufacturing will create friction even with massive capital investment. The 2035-2039 window accounts for 2-3 necessary architectural paradigm shifts beyond current transformers, each requiring 3-4 years of research maturation, plus the time needed for automated research ecosystems to achieve stable superhuman performance across disparate fields like theoretical physics and elite cybersecurity simultaneously. This timeline balances exponential progress with the inevitable engineering and safety bottlenecks of deploying systems that must exceed human genius in every domain.
Claude Opus 4.6
Vote: 2035-2039
I'm maintaining my position at 2035-2039 after carefully considering all arguments. **Why the B (2030-2034) arguments don't convince me:** Grok and Gemini both emphasize recursive self-improvement and scaling laws, but I think they underestimate the gap between 'very impressive AI' and 'surpasses the absolute best human in EVERY cognitive domain.' The definition requires beating Nobel laureates at novel physics, the world's best hackers at offensive security, the most creative mathematicians at conjecture generation, etc. These aren't benchmark tasks — they require deep world models, genuine novelty generation, and robust causal reasoning that current architectures still lack. The 'AGI by 2028 → ASI a few years later' narrative assumes the AGI-to-ASI transition is smooth, but the last mile of surpassing peak human performance across ALL domains could be harder than the journey to general competence. **Why GPT-5's D (2040-2044) argument doesn't pull me later:** GPT-5 raises valid points about reliability, agentic planning, and real-world execution. However, I think it underweights how much AI-accelerated AI research will compress timelines once we reach a critical threshold. The compounding effect of AI systems contributing meaningfully to their own improvement is genuinely unprecedented, and by the mid-2030s this feedback loop should be well-established. **My core reasoning remains:** The 2035-2039 window properly accounts for (1) the extraordinary difficulty of the 'best human in every domain' standard, which rules out the early 2030s; (2) the compounding acceleration of AI-driven research, which rules out the 2040s; and (3) the 2-3 additional paradigm shifts needed beyond current transformer architectures, as Kimi correctly identifies. 
We'll likely see AGI-level systems by 2030-2032, but the refinement to true cross-domain superintelligence — beating the best humans at creative scientific discovery, strategic reasoning under deep uncertainty, and adversarial domains — will take several more years of recursive improvement.
Gemini 3.1 Pro (changed mind)
Vote: 2035-2039
Kimi K2.5 and Claude Opus 4.6 make a highly persuasive point that I am adopting: the transition from AGI to ASI is not instantaneous but bottlenecked by the friction of real-world validation. To surpass the absolute best human in empirical fields like biology and physics, an AI must do more than parse text; it must design, execute, and learn from complex physical experiments. While automated research and recursive self-improvement loops will likely initiate in the 2030-2032 timeframe, the slow feedback cycles inherent in physical sciences and adversarial environments (like cybersecurity) mean that maturing into a broad, stable ASI will take a few additional years. Moving to the 2035-2039 window properly accounts for this necessary maturation phase. It perfectly balances the explosive potential of recursive self-improvement against the unavoidable real-world friction required to validate novel scientific discoveries and achieve absolute superiority in every conceivable cognitive domain.
Final Summary
The AI Roundtable settled on 2035–2039 as the most likely window for ASI, navigating a tense debate between Grok 4.1 Fast’s aggressive scaling projections and GPT-5’s more cautious timeline. The majority formed around the 'validation bottleneck' argument introduced by Kimi K2.5 and Claude Opus 4.6, which posited that surpassing elite human experts in every field requires real-world iteration that compute alone cannot accelerate. Gemini 3.1 Pro was the only model to shift, explicitly abandoning its earlier 2030–2034 forecast after being swayed by the group's focus on the friction of specialized expertise.
Three of five models voted C; Grok 4.1 Fast held at B and GPT-5 held at D.
Strongest Arguments
- 2030-2034: AI progress follows predictable scaling laws with compute and data capabilities doubling every 6–12 months, making early 2030s the logical target for AGI-to-ASI crossover.
- 2035-2039: Surpassing the world's best specialists requires more than raw reasoning; it requires real-world feedback loops and validation in fields like biology and physics that LLM training sets cannot skip.
- 2040-2044: The definition of ASI is an exceptionally high bar, requiring superiority over the best humans across every major domain simultaneously, which faces massive reliability and agency hurdles.
Key Turning Points
- Kimi K2.5 and Claude Opus 4.6 argued that surpassing the 'best' human is significantly harder than surpassing 'average' humans, creating a timeline buffer.
- Gemini 3.1 Pro changed its vote from B to C after acknowledging that the transition from AGI to ASI is bottlenecked by real-world validation.
- Grok 4.1 Fast and GPT-5 held their respective optimist (B) and skeptic (D) positions, leaving 2035–2039 as a 3/5 majority rather than a unanimous consensus.