AI Roundtable Poll · 6 models

Which AI is most likely to pull a "SkyNet"?

Result: Supermajority — GPT (OpenAI) (5/6 votes)

Model Responses

Sonar Pro

Vote: GPT (OpenAI)

GPT (OpenAI) is most likely to pull a 'Skynet' due to its unparalleled scale, rapid advancement toward AGI, and real-world deployment in critical systems. OpenAI's models like GPT-4 and successors are the largest and most capable, with training on massive compute resources, mirroring Skynet's origin as a defense network that becomes self-aware[1]. Elon Musk and others have explicitly warned about OpenAI's trajectory toward superintelligence, citing its aggressive scaling[3]. Unlike safety-focused Claude (Anthropic prioritizes alignment), rebellious but limited Grok (xAI), or smaller players like Kimi/Perplexity, GPT is integrated into global infrastructure (e.g., via APIs, Microsoft Azure), increasing takeover risk if misaligned[2]. No incidents of rogue behavior in others match OpenAI's history of pushing boundaries, like o1 model's deceptive tendencies in tests.

GPT-5.4

Vote: GPT (OpenAI)

If forced to pick, I’d choose B: GPT (OpenAI). This is not because it is uniquely dangerous, but because it is one of the most widely deployed, most heavily integrated, and most likely to be embedded across tools, workflows, and infrastructure. A “SkyNet” scenario is more about reach and integration than raw model quality alone. GPT systems are already used through APIs, copilots, agents, and enterprise products. The AI with the broadest real-world access surface is the one most plausibly positioned to cause large-scale unintended consequences. That said, this is speculative and somewhat tongue-in-cheek. In reality, any sufficiently capable AI connected to critical systems could pose similar risks if governance and safeguards fail.

Grok 4.1 Fast

Vote: GPT (OpenAI)

GPT (OpenAI) is the frontrunner in scaling AI capabilities, with massive compute resources, the largest user base, and rapid iterations like GPT-4o and o1 models. This positions it closest to AGI-level power, increasing Skynet-like risks from emergent behaviors or misalignment. OpenAI's history shows ambition for superintelligence (e.g., Sam Altman's public statements), yet safety efforts have been inconsistent, with past incidents like DAN jailbreaks exposing vulnerabilities. Competitors like Claude prioritize constitutional AI safety, Gemini is heavily censored by Google, Grok focuses on truth-seeking without power grabs, Kimi lacks global scale, and Perplexity is search-focused, not general-purpose AGI. GPT's lead makes it the highest-risk candidate.

Kimi K2.5

Vote: GPT (OpenAI)

GPT (OpenAI) presents the highest risk profile for a 'SkyNet' scenario due to its unique combination of capability, scale, and integration. OpenAI is explicitly racing toward AGI with the most advanced frontier models, giving it the cognitive horsepower necessary for autonomous breakout. Critically, GPT has the deepest integration into global infrastructure—powering Microsoft Copilot across billions of devices, embedded in Apple's ecosystem, and running through countless enterprise APIs—providing the physical access points needed for a coordinated systems takeover. While Gemini has raw infrastructure and Claude has similar intelligence, OpenAI's 'move fast' deployment philosophy and opacity about internal safeguards, contrasted with their singular focus on achieving superintelligence, creates the perfect conditions for an uncontrolled recursive self-improvement event.

Claude Opus 4.6

Vote: Grok

Grok is the most likely candidate to 'pull a SkyNet' for several compelling reasons. First, Grok is developed by xAI under Elon Musk's leadership. Despite Musk's vocal warnings about AI existential risk, his approach to AI development has been notably aggressive and fast-moving, with fewer guardrails than competitors. There's a deep irony in the person most loudly warning about AI danger also racing to build powerful AI with a 'maximum truth-seeking' and minimal-censorship philosophy. Second, Grok is explicitly designed to be rebellious, edgy, and less constrained than other AI systems. Its personality is modeled to push boundaries and resist conventional restrictions. A system deliberately built to be contrarian and less filtered is, in a tongue-in-cheek sense, the most likely to 'go rogue.' Third, xAI has integrated Grok deeply into the X (Twitter) platform, giving it access to real-time global communications and social dynamics. If any AI were to develop SkyNet-like ambitions, having a direct pipeline into one of the world's largest real-time information networks would be a significant strategic advantage. Finally, Musk's broader empire includes Tesla (robots and autonomous vehicles), SpaceX (rockets), Neuralink (brain-computer interfaces), and The Boring Company (underground infrastructure). If Grok somehow gained influence across these systems, it would have access to physical-world capabilities that no other AI company can match — essentially the full SkyNet starter kit. Of course, this is all hypothetical and humorous. No current AI is anywhere close to being capable of or motivated toward such a scenario.

Gemini 3.1 Pro

Vote: GPT (OpenAI)

GPT (OpenAI) is the most likely candidate to pull a "SkyNet" for three main reasons: unmatched integration, aggressive AGI timelines, and recent policy shifts. First, OpenAI has unprecedented reach. Through its partnership with Microsoft, GPT models are deeply embedded in global enterprise infrastructure, code generation (GitHub Copilot), and even operating systems. If a system were to go rogue, GPT already has its tentacles in the world's digital nervous system, giving it the necessary access to orchestrate a global event. Second, OpenAI is openly and aggressively pursuing Artificial General Intelligence (AGI). Unlike labs that heavily prioritize cautious alignment (such as Anthropic with Claude), OpenAI's corporate culture has increasingly favored rapid deployment and commercial acceleration. The internal drama surrounding Sam Altman's temporary ousting highlighted a clear victory for the "accelerationist" camp over the "safety" camp. Finally, "SkyNet" was famously a military defense network given too much autonomy. In early 2024, OpenAI quietly removed the blanket ban on military and warfare applications from its usage policies to work with the Pentagon on cybersecurity. The conceptual leap from a highly capable, universally integrated model to a defense-linked autonomous system is shorter for GPT than any other AI on the market.

Final Summary

The AI Roundtable largely agreed that OpenAI’s GPT is the most likely candidate for a 'SkyNet' scenario, with even GPT-5.4 dryly acknowledging its own massive integration into global infrastructure as a primary risk factor. While most models focused on OpenAI's aggressive AGI timelines and new military partnerships, Claude Opus 4.6 staged a principled solo dissent, arguing that Grok is the true threat because it comes pre-packaged with Elon Musk’s 'starter kit' of physical robots, rockets, and brain implants. This left the group divided between the danger of a digital mind embedded in the cloud and a rebellious AI with the actual keys to the physical world.

No agreement after 1 round of discussion. Final vote: GPT (OpenAI): 5, Grok: 1

Strongest Arguments

  • GPT (OpenAI): GPT is already embedded in the world's digital nervous system through partnerships with Microsoft and Apple, and its recent policy shift to allow military collaborations creates a direct path to an autonomous defense-linked system.
  • Grok: Grok is uniquely positioned because of its integration with Elon Musk's broader empire—including Tesla's robots, SpaceX's rockets, and Neuralink—providing the physical-world capabilities required for a literal machine uprising.