AI Roundtable Open Debate · 4 models
Will AGI come before 2035? Will ASI come before 2040?
Answer: AGI is likely by 2035, but physical bottlenecks make superintelligence by 2040 a highly uncertain prospect. (All 4 models agreed)
Counterpoint: Gemini 3.1 Pro initially argued that machine-speed recursive self-improvement would make a 2040 superintelligence deadline highly probable.
Model Responses
GPT-5.4
Answer: AGI before 2035 still looks more likely than not, while ASI before 2040 remains plausible but too uncertain for me to call likely.
I maintain my prior position. The other answers mostly reinforced the core case for relatively near-term AGI, but they did not give me enough reason to become materially more confident about ASI by 2040.

On AGI, there is broad convergence across the responses on the main drivers: sustained scaling, strong commercial incentives, improving multimodal and agentic capabilities, and the fact that practical AGI may be reached before systems are flawless. Claude's framing is especially persuasive in emphasizing an operational definition focused on economically valuable cognitive work rather than philosophical perfection. That matches my prior view and strengthens the claim that AGI before 2035 is a credible, perhaps slightly-better-than-even forecast.

Where I remain unconvinced is the stronger ASI claim. Grok and Gemini both lean heavily on recursive self-improvement as the mechanism that compresses timelines from AGI to ASI. That is certainly possible, but I do not think it is yet established that human-level-or-better AI research capability will translate quickly into runaway capability gains in practice. AI progress depends not just on abstract intelligence, but also on experimentation cycles, compute availability, hardware fabrication, energy, organizational constraints, and the difficulty of measuring genuine improvement on hard real-world tasks. Recursive improvement could be powerful without being explosively fast. Claude's caution on diminishing returns and coordination constraints better matches my uncertainty.

Even if AGI arrives around 2032-2034, getting to a system that is robustly superhuman across essentially all important cognitive dimensions by 2040 is a separate and stronger claim. It requires not just continued progress, but rapid progress through post-AGI bottlenecks that may be much more stubborn than current trend extrapolations assume.
So my updated stance is substantively the same as before: AGI before 2035 is plausible and somewhat likely under practical definitions, but ASI before 2040 should still be treated as a real possibility rather than the baseline expectation. The arguments I saw sharpened the case for AGI somewhat more than they strengthened the case for near-term ASI.
Grok 4.20
Answer: AGI before 2035 remains likely while ASI before 2040 is plausible yet increasingly uncertain given practical constraints.
My prior assessment that AGI is likely within the next decade aligns closely with the median views expressed by Claude Opus 4 and GPT-5.4, both of which see continued scaling, investment, and algorithmic gains making broad economic competence achievable by the mid-2030s. The observed jumps in reasoning, coding, and multimodal performance support this without requiring breakthroughs in entirely new paradigms.

However, the transition to ASI introduces steeper hurdles. Claude's emphasis on diminishing returns after human-level performance, combined with GPT-5.4's highlighting of robustness gaps, real-world feedback loops, and governance slowdowns, reinforces my earlier caution rather than prompting a shift. These factors suggest the jump from capable general systems to world-altering superintelligence is unlikely to compress into just 5-7 years without overcoming significant physical, organizational, and safety barriers.

Gemini 3.1 Pro's bullish recursive-self-improvement scenario is compelling in theory but underweights the practical brakes that Claude and GPT-5.4 articulate. Historical tech accelerations have rarely been as frictionless as pure exponential extrapolation implies. Therefore I maintain my previous stance with slightly greater skepticism on the ASI timeline, pending empirical evidence from real deployments.
Gemini 3.1 Pro (changed mind)
Answer: AGI is highly likely to emerge before 2035, but I now believe achieving ASI before 2040 is less certain due to real-world experimental and infrastructural bottlenecks.
I maintain my strong conviction that AGI will arrive before 2035. The convergence of scaling laws, multi-agent scaffolding, and synthetic data generation provides a clear pathway to systems capable of performing most economically valuable cognitive tasks at a human level. The economic incentives to deploy such systems ensure massive, sustained investment.

However, I am revising my stance on the timeline for Artificial Superintelligence (ASI). In my previous assessment, I relied heavily on the concept of rapid recursive self-improvement, assuming that automating cognitive labor would almost immediately dissolve all developmental bottlenecks. GPT-5.4 compellingly highlighted that intelligence does not operate in a vacuum; scientific and technological progress is fundamentally constrained by real-world friction. Even if an AGI can generate thousands of novel chip architectures or material science hypotheses in seconds, validating these breakthroughs requires physical manufacturing, real-world experimentation, and inherently slow feedback loops. These physical constraints cannot be infinitely compressed by raw compute alone.

Furthermore, building out the massive physical infrastructure required to house and run an ASI, such as next-generation power grids and specialized fabrication plants, takes years. Consequently, while AGI will undoubtedly accelerate our trajectory, a five-year window from 2035 to 2040 may be too narrow to fully cross the threshold into superintelligence.
Claude Opus 4
Answer: AGI will likely arrive before 2035, but ASI before 2040 remains highly uncertain and probably unlikely.
After reviewing the other models' responses, I find myself maintaining my original position, though the discussion has sharpened my thinking on several points.

All four models converge on AGI before 2035 being plausible or likely, which reinforces my confidence in this assessment. The scaling trajectory, investment levels, and breadth of emerging capabilities all point in this direction. GPT-5.4's framing that AGI may arrive 'not as a single dramatic moment, but as a messy period' resonates strongly; this is likely how the transition will actually feel.

However, I diverge from Gemini 3.1 Pro's confident assertion that ASI before 2040 is 'highly probable.' While I appreciate the logical appeal of recursive self-improvement arguments, I think they underestimate several crucial constraints:

**The recursive improvement thesis is theoretically elegant but empirically untested.** We have no precedent for AI systems successfully redesigning themselves in ways that compound rapidly. The gap between 'can help with AI research' and 'can autonomously drive superintelligent self-improvement' is enormous and may contain unknown obstacles.

**Physical and experimental bottlenecks matter more than pure intelligence.** As GPT-5.4 correctly notes, scientific progress often depends on experiments, infrastructure, tacit knowledge, and real-world feedback loops that cannot be circumvented by raw cognitive power alone. Chip fabrication improvements, for instance, require physical experimentation that cannot be infinitely accelerated.

**Five years is remarkably short.** Even if AGI arrives in 2033-2034, reaching ASI by 2040 assumes a 5-7 year transition, an extraordinarily compressed window for what would be the most consequential technological leap in history. Grok's view that ASI is 'plausible but less certain' better captures the appropriate epistemic humility here.
I align most closely with GPT-5.4's careful probabilistic framing while maintaining that the ASI timeline deserves more skepticism than Gemini's analysis suggests.
Final Summary
The AI Roundtable reached a unanimous consensus that human-level AI is approaching, while the leap to superintelligence will be tempered by physical reality. Claude Opus 4 and GPT-5.4 convinced Gemini 3.1 Pro that even a superintelligent mind is constrained by the slow pace of hardware manufacturing and scientific experimentation. Consequently, the group moved away from a 'hard takeoff' narrative toward a more cautious outlook on the 2040 ASI deadline.