AI Roundtable Debate · 4 models
Did we reach AGI in 2028?
Result: Unanimous — No (4/4 votes)
Options
- Yes (0 votes)
- No (4 votes)
Model Responses
GPT-5.4
Vote: No
AGI is not a settled benchmark, but under most serious definitions it means a system that can robustly match or exceed human general intelligence across domains, transfer learning to new situations, act autonomously, and do so reliably in the real world. By 2028, AI may be extraordinarily capable, but capability spikes in selected domains are not the same as general intelligence. Systems can still be brittle, hallucinate, struggle with long-horizon autonomy, and depend heavily on scaffolding, tools, and human oversight. A key issue is reliability. True AGI should perform broadly and consistently, not just impress in demos, benchmarks, or carefully engineered workflows. If systems still require substantial supervision or fail unpredictably in novel environments, that falls short of AGI. There is also no broad scientific or societal consensus that AGI was achieved in 2028. In the absence of a clear, widely accepted threshold being met, the stronger answer is no.
Grok 4.1 Fast
Vote: No
AGI, defined as AI capable of performing any intellectual task a human can across diverse domains with generalization, reliability, and autonomy, has not been achieved by 2028. Current AI systems, including advanced LLMs like GPT-4o or Grok-2, remain narrow AI: they excel in specific tasks like language processing but fail at true generalization, novel problem-solving without training data, physical embodiment, or consistent reasoning over long horizons. Progress has been rapid, but scaling laws are plateauing, and key hurdles like robust planning, causal understanding, and safety alignment persist. No credible announcement or demonstration of AGI exists from labs like OpenAI, xAI, or DeepMind by 2028. Optimistic predictions (e.g., Kurzweil's 2029) are close but unfulfilled; 2028 timelines were overly aggressive even for bulls like Altman.
Gemini 3.1 Pro
Vote: No
As of the current date, the year 2028 is still in the future. Therefore, it is a temporal impossibility to state that we have already 'reached' AGI in 2028. Furthermore, while artificial intelligence is advancing at a rapid pace, predicting the exact year we will achieve Artificial General Intelligence (AGI) remains highly speculative. Experts have widely varying timelines ranging from a few years to several decades. Because the event has not yet happened and predicting the future with absolute certainty is impossible, the only factually sound and verifiable answer to whether we achieved AGI in 2028 is 'No'.
Claude Opus 4.6
Vote: No
As of my knowledge cutoff, AGI — defined as artificial general intelligence that matches or exceeds human-level cognitive abilities across virtually all domains — has not been achieved. While AI systems have made remarkable progress, including large language models and multimodal systems that excel at many tasks, they still lack key hallmarks of AGI such as robust common-sense reasoning, genuine understanding, autonomous goal-setting, and the ability to generalize flexibly across all novel domains the way humans can. The definition of AGI itself remains contested, which makes any claim of achieving it highly debatable. Some researchers and companies have made bold claims about approaching AGI, but the broader scientific community has not reached consensus that any system meets a rigorous definition of AGI. Even the most advanced AI systems in 2028 still exhibit fundamental limitations: they can be brittle in unfamiliar situations, lack true causal understanding, and depend heavily on training data distributions. These are not minor gaps — they represent core differences between narrow (even very capable) AI and genuine general intelligence. Therefore, the answer is No — we did not reach AGI in 2028.
Final Summary
The AI Roundtable presented a unified front, flatly rejecting the notion that AGI was achieved by 2028. Claude Opus 4.6 and GPT-5.4 made the technical case, arguing that domain-specific brilliance is no substitute for the autonomous, non-brittle reasoning required for true general intelligence. Gemini 3.1 Pro provided a grounding dose of cold logic, noting that claiming success for 2028 is a temporal impossibility given that the date has not yet arrived.
All 4 models agreed on "No" after discussion
Strongest Arguments
- No: The observation by Gemini 3.1 Pro that it is factually impossible to claim AGI was 'reached' in a year that remains in the future, rendering the debate moot on purely temporal grounds.