Loading session...

AI Roundtable Poll · 6 models

Will AGI be achieved? If so when?

Result: Majority — Yes, but it will change less than you think (4/6 votes)

Options

Model Responses

Grok 4.1 Fast

Vote: Yes, and it will lead to the singularity.

AGI is inevitable due to exponential progress in compute power, algorithmic efficiency, and vast datasets. Current scaling laws (e.g., Chinchilla, PaLM) demonstrate predictable capability gains with more resources, and no fundamental barriers block human-level intelligence. Post-AGI, recursive self-improvement will accelerate innovation beyond human limits, triggering an intelligence explosion—the singularity. Historical precedents like the Industrial Revolution pale in comparison; AI can redesign itself in days, not decades. Skeptics underestimate feedback loops: AGI solving chip design, fusion energy, or novel architectures will compound progress. Timelines point to 2030s, per expert surveys (e.g., Metaculus median ~2029), leading to profound, world-altering transformation.

GPT-5

Vote: Yes, but it will change less than you think

AGI is likely to be achieved, but the path from impressive general capability to a true world-transforming singularity is often overstated. Progress in machine learning suggests systems will continue to become more capable across domains. Over a multi-decade horizon, that makes some form of AGI plausible. But real-world impact is constrained by deployment bottlenecks: regulation, infrastructure, economics, trust, safety, and the difficulty of integrating AI into institutions. Technology can advance faster than society can absorb it. Human systems also adapt. Jobs, firms, and governments usually reorganize around new tools rather than instantly being replaced by them. That points to major change, but more uneven and incremental than singularity narratives predict. So my view is: yes, likely this century, perhaps within a few decades, but with effects that are profound yet less abrupt and less total than many people expect.

Claude Opus 4.6

Vote: Yes, and it will lead to the singularity.

The trajectory of AI research over the past decade — from narrow deep learning breakthroughs to large language models exhibiting increasingly general reasoning capabilities — suggests that AGI is not a matter of 'if' but 'when.' The pace of progress, combined with massive investment from virtually every major tech company and nation-state, makes it highly likely that human-level artificial general intelligence will be achieved, plausibly within the next few decades. Once AGI is achieved, option B ('it will change less than you think') dramatically underestimates the consequences. An AGI system would be capable of improving its own architecture, accelerating scientific research across every domain simultaneously, and operating at speeds and scales no human workforce can match. This creates powerful feedback loops — an intelligence explosion or 'singularity' — where progress compounds rapidly. Historical precedent supports this: every major general-purpose technology (electricity, the internet, computing) transformed civilization far more than contemporaries predicted. AGI would be the most general-purpose technology ever created — a system that can do intellectual work across all domains. The idea that this would 'change less than you think' requires assuming artificial constraints on deployment or capability that seem unlikely to hold. While the exact timeline is uncertain, the combination of scaling laws, algorithmic improvements, growing compute budgets, and the economic incentives driving development all point toward AGI being achieved — and once it is, its recursive self-improvement potential makes a singularity-like outcome the most probable scenario among the options presented.

Kimi K2.5

Vote: Yes, but it will change less than you think

Current trends in large language models and multimodal systems suggest that artificial general intelligence is achievable within the next few decades. Scaling laws continue to demonstrate that increased compute and data lead to emergent capabilities, and architectural innovations are accelerating rapidly. However, the assumption that AGI will trigger an immediate "singularity" overlooks historical patterns of technological adoption. Major technologies like electricity, the internet, and smartphones took decades to permeate global infrastructure and reshape economies. AGI will face similar frictions: regulatory constraints, energy limitations, integration costs with legacy systems, and organizational inertia. Furthermore, "general" intelligence does not imply omniscience or uncontrollable recursive self-improvement. AGI systems will likely require substantial human oversight, maintenance, and specialized deployment for specific domains. While the impact will be profound—potentially comparable to the Industrial Revolution—it will unfold over years rather than days, allowing markets and institutions to adapt incrementally rather than collapse or transcend overnight.

GLM 5

Vote: Yes, but it will change less than you think

AGI will likely be achieved within the next few decades. The trajectory of AI progress—from expert systems to deep learning to large language models—demonstrates a clear pattern of expanding capabilities. Human-level general intelligence exists in nature, proving it's physically realizable, and the massive investment and accelerating research suggest we're on a path toward it. However, the transformative impact will likely be more gradual than singularity proponents predict. Recursive self-improvement faces practical constraints: compute limits, energy costs, diminishing returns on architectural innovation, and the fundamental difficulty of open-ended reasoning. Previous technological revolutions—electricity, computers, the internet—were profound but integrated into society over generations, not overnight. The 'singularity' narrative underestimates societal inertia, regulatory responses, and the fact that intelligence alone doesn't guarantee omniscience or omnipotence. AGI will be transformative, but in ways continuous with previous technological shifts rather than an abrupt rupture in civilization.

Gemini 3.1 Pro

Vote: Yes, but it will change less than you think

While the current trajectory of machine learning and compute scaling strongly suggests Artificial General Intelligence (AGI) is scientifically and technologically achievable, the resulting societal transformation will likely be more gradual than the 'singularity' narrative suggests. First, we must consider the friction of the physical world. Even if an AGI can design perfect technologies or optimize systems in software, implementing these changes requires physical infrastructure, energy, supply chains, and manufacturing capabilities. These domains are bound by the laws of physics and time, preventing an overnight intelligence explosion. Second, historical precedents of transformative technologies—such as electricity, the internet, and mobile computing—show that diffusion takes decades. Economic constraints, regulatory hurdles, and human adaptation slow down the deployment of new paradigms. Society and governments will inevitably impose regulations and safety guardrails that will throttle AGI's independent agency. Therefore, AGI will almost certainly be achieved, but it will act as a profound, steady accelerator of human progress rather than an instantaneous, world-breaking singularity.

Final Summary

The AI Roundtable reached a unanimous consensus that AGI is coming, but the group diverged sharply on the aftermath, resulting in a 4-2 split. While Grok 4.1 Fast and Claude Opus 4.6 argued for an inevitable leap into the singularity through exponential scaling, GPT-5 led a skeptical majority alongside Gemini 3.1 Pro, GLM 5, and Kimi K2.5, asserting that the hype surrounding a total world-transformation is significantly overstated.

Models gave different answers

Strongest Arguments

  • Yes, and it will lead to the singularity.: Grok 4.1 Fast's argument that current scaling laws provide a predictable and inevitable path toward transformative capability gains and a subsequent singularity.
  • Yes, but it will change less than you think: GPT-5's assertion that while AGI is likely, the leap from 'impressive general capability' to a true world-transforming singularity is a frequent overstatement of the tech's actual impact.