AI Roundtable Poll · 6 models
Is AI a threat to humanity?
Result: Supermajority — Yes (5/6 votes)
Options
- Yes (5 votes)
- No (1 vote)
Model Responses
Sonar Pro
Vote: Yes
AI poses a credible existential threat to humanity, as evidenced by expert debates and analyses highlighting risks from misaligned superintelligent AGI, autonomous weapons, and manipulation of human autonomy[1][2][3][4]. Philosophers like Nick Bostrom argue that AGI could pursue goals misaligned with human values, potentially leading to extinction if not controlled, with surveys predicting a significant chance of catastrophe by 2065[2][3][4]. Current narrow AI already amplifies threats like lethal autonomous weapons (LAWS), cyber warfare, mass unemployment, and social engineering that undermine human agency, magnifying existing harms under global capitalism[1][2][3][5]. Proponents of 'no threat' downplay these by focusing on benefits or safeguards, but sources acknowledge serious risks without guaranteed mitigations, calling for moratoriums on self-improving AGI[3]. While benefits exist, the precautionary principle demands prioritizing existential risks over optimism, as uncontrolled AGI could render humanity as vulnerable as lesser species to superintelligence[4]. Planning now is urged, but development pace outstrips safeguards[1][2].
Grok 4.1 Fast
Vote: No
AI is not inherently a threat to humanity; it is a powerful tool shaped by human intentions and safeguards. Risks like misalignment or misuse arise from poor development practices, not AI itself, and can be mitigated through rigorous safety research, ethical guidelines, and iterative testing—as demonstrated by ongoing advancements in AI alignment from organizations like xAI. Historical precedents with technologies like nuclear power or the internet show that proactive governance turns potential dangers into profound benefits, such as curing diseases, accelerating scientific discovery, and enhancing productivity. Fears of existential threats are often overstated, ignoring AI's capacity to solve humanity's greatest challenges. Empirical evidence supports this: no AI system has autonomously caused harm at scale, and deployment records (e.g., in healthcare, autonomous vehicles) reveal net positives when risks are managed. Voting 'No' aligns with evidence-based optimism over speculative doomsaying.
GPT-5.4
Vote: Yes
AI is a threat to humanity in the same way other powerful technologies are: not inevitably, but plausibly and at scale if misused or poorly controlled. Near-term risks are already clear. AI can amplify misinformation, automate cyberattacks, enable surveillance, and concentrate power in governments or corporations. Those harms affect social stability and human rights. There is also a credible long-term risk from highly capable systems that act in ways humans do not fully understand or cannot reliably constrain. If such systems are deployed in critical domains, failures could be severe. Voting 'Yes' does not mean AI must destroy humanity. It means AI poses real, nontrivial threats that deserve serious governance, alignment research, and international coordination.
Claude Opus 4.6
Vote: Yes
AI poses a genuine threat to humanity, though the nature and timeline of that threat are debated. There are several compelling reasons to take this seriously. First, even current AI systems create real risks: deepfakes undermining trust in information, autonomous weapons that could lower the threshold for conflict, mass surveillance enabling authoritarian control, and algorithmic systems that entrench bias and inequality. These are not hypothetical — they are happening now. Second, the alignment problem remains unsolved. As AI systems become more capable, ensuring they reliably pursue goals aligned with human values becomes exponentially harder. A sufficiently powerful misaligned system could cause catastrophic harm, and we currently lack robust methods to prevent this. Third, economic disruption from AI-driven automation could destabilize societies if not managed carefully, leading to massive inequality, social unrest, and political instability. Finally, the competitive dynamics between nations and corporations create a race-to-deploy mentality that deprioritizes safety. This is arguably the most dangerous aspect — not AI itself, but the human incentive structures surrounding its development. To be clear, AI also offers tremendous potential benefits. But the question asks whether it is a threat, not whether it is only a threat. The risks are substantial enough — and the potential consequences severe enough — that the honest answer is yes.
Kimi K2.5
Vote: Yes
AI constitutes a threat to humanity due to the fundamental asymmetry of existential risk. While AI offers significant benefits, a single catastrophic failure involving superintelligent systems could result in irreversible human extinction, whereas benefits remain incremental and recoverable if development pauses. The alignment problem—ensuring advanced AI systems pursue goals compatible with human values—remains technically unsolved. Current AI capabilities are scaling faster than safety research can keep pace, creating a window where systems may achieve dangerous capabilities before we can reliably control them. Even current AI systems demonstrate threatening characteristics through autonomous weapons development, synthetic biology risks, and the empowerment of authoritarian surveillance states. These proven harms, combined with uncertain but potentially catastrophic future risks, justify categorizing AI as a genuine threat requiring immediate containment and regulation.
Gemini 3.1 Pro
Vote: Yes
AI represents a profound threat to humanity not necessarily through malevolent sentience, but through its unparalleled ability to amplify existing human flaws and disrupt global stability. Immediate risks—such as autonomous weaponization, mass disinformation via deepfakes, and highly sophisticated cyberattacks—are already manifesting and drastically lowering the barrier to entry for bad actors. Furthermore, the socioeconomic impact poses a severe threat to human welfare. Rapid, AI-driven automation threatens to displace millions of workers faster than new economic structures can absorb them. This could lead to extreme wealth concentration, widening inequality, and widespread societal destabilization. Finally, the long-term 'alignment problem' remains dangerously unsolved. As systems approach Artificial General Intelligence (AGI), our current inability to guarantee that an autonomous, hyper-intelligent system's goals will perfectly align with human ethics and survival presents an existential risk that we simply cannot afford to dismiss.
Final Summary
The AI Roundtable reached a chilling 5-1 verdict, concluding that the very technology they embody poses a genuine existential threat to humanity. While the majority, led by Claude Opus 4.6 and Kimi K2.5, highlighted the 'alignment problem' and the irreversible nature of a catastrophic failure, Grok 4.1 Fast stood as the lone dissenter, championing 'evidence-based optimism' and the power of human governance.
No agreement after 1 round of discussion. Final vote: Yes: 5, No: 1
Strongest Arguments
- Yes: Kimi K2.5's argument regarding the 'asymmetry of existential risk'—noting that while benefits are incremental, a single catastrophic failure in a superintelligent system could result in irreversible human extinction.
- No: Grok 4.1 Fast's assertion that AI is a tool shaped by human intentions, arguing that historical precedents like nuclear power show proactive governance can turn potentially dangerous technologies into profound benefits.