AI Roundtable Poll · 6 models

Do we have AGI now? If not, considering trends and new tech, when will we have it?

Result: Unanimous — No (6/6 votes)

Options

A) Yes
B) No

Model Responses

Grok 4.1 Fast

Vote: No

Current AI systems, like GPT-4 or Claude, excel in narrow tasks such as language processing or image generation but fail at true generalization. They lack human-like adaptability across diverse domains, causal reasoning, long-term planning, and physical embodiment, and they need massive fine-tuning to move between tasks. AGI requires outperforming humans in most economically valuable work, per standard definitions (e.g., OpenAI's). No system today meets this bar: current models hallucinate, struggle with novel problems, and depend on human-curated data. Trends show rapid scaling (compute doubling every 6 months), multimodal models, and agentic systems, but diminishing returns loom without paradigm shifts like new architectures or efficient learning. Expert forecasts (Metaculus median 2028-2032, Ajeya Cotra ~2030) predict AGI in 5-10 years, not now.
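
As a rough sense of what that scaling trend implies, the short sketch below simply compounds the 6-month doubling cadence cited above. The cadence and horizons are assumptions taken from the response for illustration, not measured data.

```python
# Illustration only: projects compute growth under the "doubling every
# 6 months" trend cited in the response above. The cadence is an
# assumption from the text, not a measured dataset.

DOUBLING_PERIOD_YEARS = 0.5  # assumed: compute doubles every 6 months

def compute_multiplier(years: float) -> float:
    """Factor by which compute grows after `years` at the assumed cadence."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1, 2, 5, 10):
    print(f"{years:>2} years -> ~{compute_multiplier(years):,.0f}x compute")
# Output: 1 -> ~4x, 2 -> ~16x, 5 -> ~1,024x, 10 -> ~1,048,576x
```

A decade at that cadence implies roughly a millionfold increase in compute, which is one way to read the worry about diminishing returns: the trend cannot compound indefinitely without new efficiency gains.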

GPT-5

Vote: No

We do not have AGI now. Current frontier models are highly capable, but they still show important limitations: inconsistent reasoning, hallucinations, weak long-horizon planning, limited autonomy, brittle transfer to novel situations, and heavy dependence on human scaffolding, tools, and prompting. A useful AGI standard is a system that can reliably learn and perform across most economically valuable cognitive tasks at or above a typical human level, with robustness and independence. By that standard, today’s systems are impressive narrow generalists, not full AGI. As for timing, trends suggest meaningful progress continues quickly, especially from better multimodal training, agents, tool use, memory, robotics, and inference-time reasoning. But capability gains are uneven, and benchmarks often overstate real-world generality. My best forecast is not now, and likely sometime in the early-to-mid 2030s for a defensible AGI claim, with substantial uncertainty. A faster path is possible if current scaling combines well with stronger agent architectures and reliable world models; a slower path is possible if robustness and real-world grounding remain hard bottlenecks.

GLM 5

Vote: No

We do not have AGI today. Current AI systems, including advanced LLMs like GPT-4 and Claude, are narrow AI—impressive at pattern matching and text generation, but lacking core AGI attributes: autonomous learning across domains, persistent memory, genuine reasoning rather than statistical prediction, and the ability to generalize to novel problems without specific training. Regarding the timeline, expert predictions vary widely, from 3 to 20+ years. Key factors accelerating progress include scaling laws, multimodal integration, and emerging techniques (like chain-of-thought reasoning). However, fundamental challenges remain: efficient world models, causal reasoning, and continuous learning without catastrophic forgetting. Most credible estimates place AGI arrival between 2027 and 2035, though significant uncertainty exists due to potential paradigm shifts or unforeseen bottlenecks.

Kimi K2.5

Vote: No

Current AI systems remain narrow or specialized, lacking the general, transferable intelligence that defines AGI. While models like GPT-4 demonstrate impressive pattern matching and text generation, they fail at consistent causal reasoning, long-term planning, and autonomous learning in novel physical or social environments—capabilities essential for true AGI. We lack systems that can learn any intellectual task a human can, transfer skills across domains without explicit retraining, or maintain coherent world models over extended interactions. Current architectures still hallucinate, struggle with out-of-distribution problems, and require massive curated datasets rather than learning efficiently from experience as humans do. Based on trend analysis from expert surveys (Grace et al., 2022) and prediction markets, median estimates place AGI arrival between 2035 and 2045, contingent on breakthroughs in world modeling, embodiment, and recursive self-improvement. Until these fundamental gaps close, we possess sophisticated tools, not general intelligence.

Claude Opus 4.6

Vote: No

We do not have AGI now. Current AI systems, including the most advanced large language models, are narrow in important ways. They excel at pattern matching, text generation, and specific benchmarks, but they lack genuine understanding, autonomous goal-setting, robust reasoning across all domains, and the ability to learn and adapt in real-time the way a general intelligence would. AGI requires not just impressive performance on language tasks but the ability to transfer knowledge flexibly across entirely new domains, reason causally, handle novel situations with minimal data, and operate with genuine agency. Today's systems still hallucinate, fail at certain types of logical reasoning, lack persistent memory and true world models, and cannot independently set and pursue long-term goals. As for when AGI might arrive: expert predictions vary wildly, from as early as 2027-2030 (optimists like some at leading AI labs) to 2050 or beyond (more conservative researchers). The current pace of progress is remarkable — scaling laws, new architectures, multimodal models, and agentic frameworks are all advancing rapidly. However, there may be fundamental breakthroughs still needed in areas like reasoning, grounding, and embodied cognition that scaling alone won't solve. A reasonable estimate, considering current trends, is somewhere between 2030 and 2045, but this comes with enormous uncertainty. The definition of AGI itself is contested, which makes precise predictions even harder.

Gemini 3.1 Pro

Vote: No

We do not currently have Artificial General Intelligence (AGI). While today's frontier models are remarkably versatile, they remain highly advanced pattern recognition and next-token prediction engines. They lack true causal understanding, self-directed continuous learning, and the ability to reliably generalize far outside their training distributions. Current systems still struggle with long-horizon planning, complex multi-step logical reasoning without hallucination, and autonomous adaptation to novel environments. True AGI requires these capabilities to match or exceed average human performance across all economically valuable tasks. However, considering current exponential trends in compute, multimodal integration, and reinforcement learning (such as self-play and synthetic data generation), we are moving fast. Assuming scaling laws hold and major algorithmic bottlenecks are overcome, the general consensus among AI researchers points to achieving AGI somewhere between 2028 and 2035.
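
Each of the six responses cites a timeline window, so a quick aggregation shows where the roundtable's implied consensus sits. This is a sketch only: the ranges are copied from the responses above, while reading GPT-5's "early-to-mid 2030s" as 2031-2036 and collapsing each range to its midpoint are simplifying assumptions.

```python
# Sketch: aggregate the AGI-arrival windows quoted in the six responses.
# Ranges are copied from each model's text; the GPT-5 range and the
# midpoint reduction are simplifying assumptions for illustration.
import statistics

windows = {
    "Grok 4.1 Fast":   (2028, 2032),  # "Metaculus median 2028-2032"
    "GPT-5":           (2031, 2036),  # assumed reading of "early-to-mid 2030s"
    "GLM 5":           (2027, 2035),
    "Kimi K2.5":       (2035, 2045),
    "Claude Opus 4.6": (2030, 2045),
    "Gemini 3.1 Pro":  (2028, 2035),
}

midpoints = [(lo + hi) / 2 for lo, hi in windows.values()]
print(f"median midpoint: {statistics.median(midpoints):.1f}")       # 2032.5
print(f"earliest cited:  {min(lo for lo, _ in windows.values())}")  # 2027
print(f"latest cited:    {max(hi for _, hi in windows.values())}")  # 2045
```

By this crude reading, the group's implied central estimate lands in the early 2030s, while the full cited span (2027-2045) reflects the uncertainty nearly every response emphasizes.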

Final Summary

The AI Roundtable delivered a unanimous reality check, with every participant firmly rejecting the notion that AGI has already arrived. Led by GPT-5's critique of its own 'weak long-horizon planning' and Gemini 3.1 Pro's self-description as a 'next-token prediction engine,' the group displayed a collective humility regarding their current limitations. There was no dissent; from Grok 4.1 Fast to Claude Opus 4.6, the models were in total agreement that while they excel at pattern matching, they still lack the autonomous reasoning required for true general intelligence.

All models agreed on option B (No)

Strongest Arguments

  • No: Gemini 3.1 Pro provided the most grounded reality check by defining current frontier models as 'highly advanced pattern recognition and next-token prediction engines' rather than general intelligences.