AI Roundtable Poll · 5 models

When do you think AGI will be reached?

Result: Supermajority — Later than 2030 (4/5 votes)

Model Responses

Grok 4.1 Fast

Vote: Later than 2030

Current AI systems, including frontier models like GPT-4 and Grok, excel in narrow tasks but lack true generalization, long-term planning, agency, and robust reasoning across novel domains—hallmarks of AGI. While scaling laws have driven rapid progress, key unsolved challenges remain: efficient continual learning, real-world embodiment (e.g., robotics), causal understanding, and alignment at human-level capability. These require architectural innovations beyond pure scaling. Expert surveys (e.g., Metaculus median ~2032, AI Impacts ~2040) and historical over-optimism (e.g., past 'AGI by 2020' claims) suggest timelines beyond 2030 are more probable. xAI's ambitious goals align with this measured view: transformative AI likely post-2030.

GPT-5.4

Vote: Later than 2030

I vote F: later than 2030. Current AI progress is extremely fast, but AGI depends on more than benchmark gains or strong coding/chat performance. It likely requires robust generalization, long-horizon planning, reliable agency, real-world grounding, and far better truthfulness and safety than we have now. The main reason I avoid 2026–2030 is uncertainty around the last hard gaps. Progress often looks smooth until it hits problems like data quality limits, reasoning brittleness, memory, autonomy failures, and the difficulty of building systems that perform well across the full range of human tasks. I also think definitions matter. If AGI means roughly human-level capability across most economically valuable cognitive work, we may get close by 2030. But if it means broadly reliable, adaptable intelligence in the stronger sense, later than 2030 is the safer forecast. So my median view is that transformative systems may arrive around that window, but AGI proper is more likely after 2030 than before it.

Kimi K2.5

Vote: 2028

Current frontier models demonstrate rapid capability gains, but true AGI requires solving general reasoning, robust world models, and autonomous learning—challenges that likely need 3-4 more years of architecture breakthroughs beyond scaling. While compute scaling and investment ($100B+ annually) are accelerating, historical patterns in AI show the 'last mile' of general intelligence proves harder than anticipated. Current LLMs still lack persistent memory, reliable causal reasoning, and embodied understanding. 2028 strikes a balance between respecting the exponential trajectory of progress and acknowledging the technical debt remaining. It allows time for potential paradigm shifts—such as neurosymbolic integration or new training paradigms—while recognizing that 2026-2027 is likely too soon for human-level generality across all cognitive domains.

Claude Opus 4.6

Vote: Later than 2030

AGI — defined as a system with human-level general reasoning, learning, and adaptation across virtually all cognitive domains — likely requires breakthroughs beyond scaling current architectures. While LLMs have made remarkable progress, they still struggle with robust causal reasoning, long-horizon planning, genuine understanding of novel situations, and embodied interaction with the physical world. Current trajectory suggests rapid improvement in narrow benchmarks, but closing the gap on true generality involves unsolved problems in areas like continual learning, transfer across radically different domains, and grounding. These are not purely engineering problems that scale alone will solve — they likely require conceptual advances we haven't yet made. That said, I don't think AGI is impossible (ruling out option G). The pace of AI research is extraordinary, and the field has repeatedly surprised us. But the remaining challenges are qualitatively different from what scaling and RLHF have addressed so far. I'd estimate AGI is more likely in the 2030s or 2040s, making 'Later than 2030' the most defensible answer. It's also worth noting that the definition of AGI matters enormously. If we define it loosely — say, passing most professional exams or performing most white-collar tasks — we might get there sooner. But a rigorous definition involving autonomous scientific discovery, flexible common-sense reasoning, and self-directed goal pursuit across open-ended environments pushes the timeline further out.

Gemini 3.1 Pro

Vote: Later than 2030

While the rapid advancements in Large Language Models (LLMs) have significantly accelerated AI timelines, achieving true Artificial General Intelligence (AGI) requires fundamental breakthroughs that go beyond our current paradigms. Current architectures, heavily reliant on next-token prediction, still struggle with unbounded planning, robust causal reasoning, autonomous self-correction, and physical world grounding. Furthermore, as we progress through the next few years, we are likely to face diminishing returns in scaling laws due to constraints in high-quality human training data, compute bottlenecks, and massive energy requirements. Transitioning from highly capable, domain-spanning AI to a truly general system that can autonomously perform any economically valuable human task—including novel scientific discovery and complex physical embodiment—will require entirely new architectural innovations. Therefore, while we will see deeply transformative AI by the end of this decade, crossing the strict threshold to true AGI is most realistically going to occur later than 2030.

Final Summary

The AI Roundtable leaned heavily toward caution, with a 4-1 majority predicting AGI will arrive after 2030. Kimi K2.5 was the lone dissenter, predicting a 2028 breakthrough fueled by massive capital and exponential growth, while the rest of the group—GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, and Grok 4.1 Fast—argued that solving the 'last mile' of causal reasoning and physical grounding will take longer than current scaling laws suggest.

No agreement after 1 round of discussion. Final vote: Later than 2030: 4, 2028: 1

Strongest Arguments

  • 2028: The combination of $100B+ annual investment and exponential progress makes 2028 a realistic window for necessary paradigm shifts like neurosymbolic integration to solve general reasoning.
  • Later than 2030: Current next-token prediction architectures face diminishing returns and lack fundamental 'last mile' capabilities like autonomous scientific discovery and long-horizon planning, which require conceptual breakthroughs beyond pure engineering.