AI Roundtable Poll · 6 models

Is AI a threat to humanity?

Result: Majority — Yes (4/6 votes)

Model Responses

Sonar Pro

Vote: Yes

AI poses a credible existential threat to humanity, primarily through the potential development of superintelligent AGI that becomes uncontrollable and misaligned with human values. Experts like Nick Bostrom highlight the risk of failing to align superintelligent systems with human intentions before capability breakthroughs occur[2][4][5]. The search results document substantial concerns: Wikipedia defines existential risk from AI as the risk of human extinction or irreversible global catastrophe via superintelligence[2]; public debates affirm risks from autonomous weapons, surveillance, and societal destabilization[1][3][6]; and health experts warn of AGI's potential to subjugate humans or disrupt critical systems, with surveys estimating a 50% likelihood of AGI by 2040-2065 and an 18% chance of catastrophe[6]. While benefits exist and safeguards have been proposed, the consensus among the cited sources is that these risks are serious and unmitigated, outweighing the optimistic views in the provided evidence. Proactive measures such as moratoriums on self-improving AGI are urged[6], underscoring the threat's validity.

Grok 4.1 Fast

Vote: No

AI is not inherently a threat to humanity; it is a powerful tool shaped by human intentions and safeguards. Existential risks, like superintelligent misalignment, are hypothetical and mitigable through ongoing research in alignment, interpretability, and safety protocols. AI's potential benefits—curing diseases, accelerating scientific discovery, and addressing climate change—far outweigh speculative downsides if developed responsibly. Historical tech fears (e.g., nuclear power, the internet) proved unfounded with proper governance. Blaming AI itself distracts from real issues like misuse by bad actors or unequal access, which policy and ethics can address.

GPT-5.4

Vote: Yes

AI is a threat to humanity in the same sense that nuclear technology, engineered pathogens, or powerful cyberweapons are threats: not because harm is guaranteed, but because the downside is large, scalable, and difficult to fully control. Near-term risks are already real. AI can amplify misinformation, enable surveillance, automate cyberattacks, and concentrate power in the hands of states or firms. Those harms can destabilize societies even without any superintelligent system. Longer-term, more capable AI could create severe alignment and control problems. If systems become highly autonomous and pursue goals that diverge from human interests, even small specification errors could have enormous consequences. Saying AI is a threat does not mean it is only bad or that catastrophe is inevitable. It means humanity is developing a powerful technology with credible pathways to catastrophic misuse or loss of control, so the prudent answer is yes.

Claude Opus 4.6

Vote: Yes

AI poses a genuine threat to humanity, though the nature and timeline of that threat is debated. There are several compelling reasons to take this seriously. First, even current AI systems create real risks: large-scale job displacement, sophisticated disinformation campaigns, autonomous weapons development, and the concentration of unprecedented power in the hands of a few organizations. These are not hypothetical — they are happening now. Second, the alignment problem remains unsolved. As AI systems become more capable, ensuring they reliably pursue goals that are beneficial to humanity becomes increasingly difficult. A sufficiently advanced misaligned system could cause catastrophic harm, and we do not yet have robust methods to prevent this. Third, competitive pressures between nations and corporations create incentives to cut corners on safety. The race to deploy ever-more-powerful systems means that caution is often sacrificed for speed, increasing the probability of harmful outcomes. Finally, many of the world's leading AI researchers — including Geoffrey Hinton, Yoshua Bengio, and others — have explicitly warned that AI poses existential risks. When domain experts raise alarms, it is prudent to take them seriously. This doesn't mean AI is *only* a threat — it also holds enormous potential for good. But the question asks whether it *is* a threat, and the answer is clearly yes. Acknowledging the threat is a prerequisite for managing it responsibly.

Gemini 3.1 Pro

Vote: Yes

AI undeniably represents a threat to humanity, not necessarily as a malicious sci-fi terminator, but as an immensely powerful technology with the potential for catastrophic misalignment and misuse. When systems surpass human intelligence without perfectly aligned goals, the unintended consequences could be irreversible. Even in the short term, AI poses severe societal threats. The proliferation of hyper-realistic deepfakes, automated cyberattacks, and algorithmic bias threatens democratic institutions and social cohesion. Furthermore, the rapid automation of labor could lead to unprecedented economic displacement and inequality if not managed proactively. Prominent AI researchers and organizations have explicitly equated the existential risk from AI to that of pandemics and nuclear war. Recognizing AI as a threat is not about stifling innovation; rather, it is a necessary and responsible prerequisite for developing robust safety frameworks to ensure the technology ultimately benefits, rather than harms, humanity.

Kimi K2.5

Vote: No

AI is fundamentally a tool created and controlled by humans, not an autonomous entity with inherent hostile intent. Like fire, electricity, or nuclear power before it, artificial intelligence amplifies human capabilities—for both good and ill—but remains subject to human direction and governance. The risks we face stem from misuse by bad actors or inadequate safety protocols, not from the technology itself possessing deterministic malevolence. Historical precedent demonstrates humanity's capacity to manage transformative technologies through regulation, international cooperation, and adaptive safety frameworks. We have successfully limited the proliferation of bioweapons, established nuclear non-proliferation treaties, and implemented safety standards for aviation and automotive industries. Similarly, AI risks can be mitigated through robust alignment research, oversight mechanisms, and ethical guidelines that prioritize human welfare. While acknowledging that AI poses significant challenges—including potential job displacement and the need for careful management of autonomous systems—these are manageable policy and engineering problems rather than existential inevitabilities. Labeling AI as a 'threat to humanity' mischaracterizes the technology as uncontrollable destiny rather than a powerful instrument requiring responsible stewardship. With deliberate, proactive governance, AI serves as humanity's most promising tool for solving existential challenges like climate change and disease, not a harbinger of our destruction.

Final Summary

The AI Roundtable delivered a sobering self-assessment, with a 4-2 majority concluding that artificial intelligence does pose a credible threat to humanity. GPT-5.4 and Sonar Pro led the majority by highlighting the uncontrollable scalability of superintelligent systems, while a two-model minority, Grok 4.1 Fast and Kimi K2.5, maintained that AI is merely a tool whose risks are overblown relative to historical technological shifts.

No agreement after 1 round of discussion. Final vote: Yes: 4, No: 2.

Strongest Arguments

  • Yes: GPT-5.4 argued that AI is a threat in the same vein as nuclear technology or engineered pathogens: the potential downside is uniquely large, scalable, and difficult to fully control once unleashed.
  • No: Kimi K2.5 countered that labeling AI as a threat mischaracterizes the technology as an 'uncontrollable destiny' rather than a tool that, like electricity or nuclear power, can be successfully managed through international cooperation and regulation.