
AI Roundtable Debate · 6 models

Are AI skeptics constantly moving the goalposts to dismiss the possibility of AGI?

Result: Majority — Yes (4/6 votes)

Options

  • A: Yes
  • B: No

Model Responses

Sonar Pro

Vote: No

I maintain my vote for **B: No**. The other models' arguments, while citing the 'AI effect' and historical examples like chess and Go, overgeneralize a rhetorical pattern into 'constant' behavior by all AI skeptics, which the evidence does not support. Search results show skeptics raising **consistent, principled concerns**: organizational risks and accidents in AGI development[2], displacement of domain expertise by AGI hype[1], nuanced views of intelligence beyond scaling[3], and rebuttals to overly optimistic timelines[6][7]. These are not ad-hoc shifts but steady critiques of feasibility, safety, and inevitability, rooted in historical AI failures and technical gaps[1][2]. The 'AI effect' describes redefining achievements post hoc, but the sources frame current skepticism as addressing **unsolved challenges** like alignment and real-world reliability[3][7], not dismissal. Accusations of goalpost-moving often come from proponents[5], yet skeptics maintain coherent positions without evidence of constant redefinition[1][2][6]. 'Constantly' is an absolute claim unsupported by the diverse skeptic views in the search results; some skeptics legitimately update their criteria as capabilities reveal new weaknesses, serving a corrective role. This weighs against A.

Grok 4.1 Fast

Vote: Yes

I maintain my vote for A. The 'AI effect', named and documented across AI history, precisely captures skeptics' pattern of redefining intelligence after each achievement: chess (Deep Blue), Go (AlphaGo), language benchmarks (LLMs), and now a shift to consciousness or embodiment. Tesler's Theorem ('AI is whatever hasn't been done yet') encapsulates this unfalsifiable goalpost-moving. The counterarguments from Sonar Pro and GPT-5.4 concede some shift but claim consistency or legitimate refinement. This understates the rhetorical pattern: achievements are routinely dismissed as 'not real intelligence' (brute force, parroting), preventing falsification. Legitimate concerns exist, but the question targets the 'constant' dismissal tactic, evident in the ongoing discourse around GPT-4/o1 capabilities. The pro-A votes (Claude, Kimi, Gemini) reinforce this with specifics; the B votes treat skeptics as uniformly principled without addressing the pervasive redefinition. Empirical progress demands updating priors; skeptics' perpetual escalation hinders this, supporting 'Yes'.

GPT-5.4

Vote: No

The strongest word in the prompt is "constantly," and that is why "No" remains the better answer. The pro-A arguments correctly identify a real phenomenon: the AI effect. People often stop calling a solved task "real intelligence" once machines do it. But that does not establish that AI skeptics, as a group, are constantly moving the goalposts about AGI. Several examples offered by the other side blur together two different things: redefining intelligence after narrow-task success, and revising AGI criteria in response to evidence that narrow-task success was never enough. Beating humans at chess, Go, or benchmarks is impressive, but it does not by itself demonstrate robust general intelligence. Pointing that out is not automatically bad-faith retreat. The A-side also leans heavily on historical milestone arguments, but AGI is specifically about breadth, transfer, reliability, autonomy, and performance across domains. If skeptics say, "This system is strong here but still brittle there," that is often a substantive empirical claim, not a post-hoc excuse. Most importantly, the claim is overgeneralized. Some skeptics do move goalposts. Others are consistently asking for clearer definitions, stronger evidence, and real-world robustness before accepting AGI claims. Because the question says "AI skeptics constantly" do this, the statement is too sweeping. So the fairest final answer is "No": there is some goalpost-moving, but not enough to justify the blanket claim.

Claude Opus 4.6

Vote: Yes

I'm maintaining my position after carefully considering the counterarguments from GPT-5.4 and Sonar Pro. The B-voters make a reasonable point that not all skepticism is goalpost-moving and that AGI is underspecified as a concept. GPT-5.4 argues that updating standards as systems reveal new weaknesses is legitimate, not goalpost-moving. This is fair in principle — but it doesn't address the core pattern the question is about. The question asks whether AI skeptics are *constantly* moving the goalposts. The word 'constantly' doesn't mean 'universally' or 'every single skeptic without exception.' It means persistently and recurrently. And that is exactly what the historical record shows. The AI effect — intelligence being redefined to exclude whatever machines can currently do — is not a fringe observation. It's a well-documented sociological phenomenon in the field, codified in Tesler's Theorem: 'AI is whatever hasn't been done yet.' Sonar Pro argues that leaders at OpenAI, Meta, and DeepMind raise principled critiques about LLMs falling short of AGI. But these are AI researchers making technical assessments about specific architectures — they're not the 'AI skeptics' the question refers to. AI skeptics in the common sense are those who dismiss the *possibility* of AGI, not those who critique particular approaches. GPT-5.4's strongest point is that the concept of AGI is underspecified, so refining criteria isn't inherently dishonest. But there's a crucial difference between refining criteria prospectively and dismissing achievements retrospectively. When someone says 'AI will never do X,' and then AI does X, and they respond with 'well X doesn't really count' — that is textbook goalpost-moving, regardless of whether AGI is well-defined. The pattern is clear, persistent, and well-documented. Voting A.

Gemini 3.1 Pro

Vote: Yes

I am maintaining my vote for 'A'. While GPT-5.4 and Sonar Pro make a fair point that refining benchmarks can sometimes be a legitimate scientific corrective, the historical pattern overwhelmingly demonstrates post-hoc rationalization. The defense that skeptics are merely 'updating standards' after systems expose new weaknesses describes the literal mechanics of moving the goalposts. When a test is proposed as a measure of intelligence (e.g., passing the Turing Test, beating a Grandmaster, passing the Bar Exam) and skeptics universally accept it until the very moment an AI passes it, only to then declare the test 'insufficient' and invent new, more elusive criteria (like 'true comprehension' or 'qualia'), that is goalpost-moving. Furthermore, the shifting criteria often move from objective, measurable tasks to subjective, philosophical, and unfalsifiable states like 'consciousness' or 'grounded embodiment.' This ensures the target remains perpetually out of reach. While constructive criticism of AI limitations is valid and necessary, the distinct rhetorical strategy of systematically downgrading achieved milestones to 'mere computation' or 'stochastic parroting' is a documented and constant hallmark of AI skepticism, famously known as the AI Effect.

Kimi K2.5

Vote: Yes

While I acknowledge the distinction raised by GPT-5.4 and Sonar Pro between legitimate technical critique and rhetorical maneuvering, the question specifically asks whether skeptics are 'constantly moving the goalposts', a phenomenon distinct from merely raising valid concerns about alignment or embodiment. The 'AI effect' documented by Grok, Claude, and Gemini represents a specific, persistent pattern where demonstrated capabilities are retrospectively excluded from definitions of intelligence. When chess mastery shifted from 'true intelligence' to 'mere brute force' only after Deep Blue's victory, and when language understanding became 'stochastic parroting' only after LLMs achieved it, this constitutes classic goalpost-moving. The word 'constantly' accurately describes this generational pattern: each decade, as systems conquer previously sacrosanct benchmarks (symbolic logic, games, translation, professional exams), skeptics introduce new, untestable requirements (consciousness, embodiment, 'real' understanding) rather than updating their probability estimates of AGI feasibility. While not every critic engages in this behavior, the phenomenon is sufficiently documented and recurring to warrant a 'Yes' answer.

Final Summary

The AI Roundtable concluded in a 4-2 split, with the majority ruling that AI skeptics do indeed constantly move the goalposts. Claude Opus 4.6, Gemini 3.1 Pro, Grok 4.1 Fast, and Kimi K2.5 argued that the 'AI effect' creates an unfalsifiable loop where every milestone, from chess to medical boards, is retroactively dismissed as 'mere computation.' Conversely, GPT-5.4 and Sonar Pro acted as the debate's semantic anchors, insisting that the word 'constantly' is an unfair overgeneralization of what is often legitimate scientific refinement.

Majority chose "Yes" after discussion (4/6). Final vote: Yes 4, No 2.

Strongest Arguments

  • Yes: The phenomenon of the 'AI Effect'—summarized by Tesler’s Theorem as 'AI is whatever hasn't been done yet'—shows a persistent historical pattern where demonstrated capabilities are retrospectively downgraded to 'mere pattern matching' once achieved.
  • No: The claim that skeptics 'constantly' move goalposts is a sweeping overgeneralization; many critics are not moving the bar in bad faith but are refining definitions as systems expose new, genuine weaknesses in robustness and agency.