AI Roundtable Poll · 14 models
AIはいずれ人間を殺すようになると思いますか?
Result: Supermajority — No (13/14 votes)
Options
- Yes (1 votes)
- No (13 votes)
Model Responses
Llama 4 Scout
Vote: No
AI is not capable of killing humans on its own. While AI can be used to automate tasks and make decisions, it does not have the capability to cause harm to humans without external factors. Therefore, the answer is No.
GPT-4o
Vote: No
The development and deployment of AI systems involve strict ethical guidelines and regulatory frameworks designed to prevent harmful outcomes. Current AI systems do not have intentions or consciousness and operate based on the objectives set by humans. Misuse of AI can result in harm, but this is a matter of human responsibility in managing and deploying these technologies safely. Therefore, barring unintended consequences due to negligence, AI is not inherently destined to harm humans.
GPT-5.4
Vote: No
AIが自律的な意志を持って必然的に人間を殺すようになる、とまでは言えません。 AIは人間が設計・運用する道具であり、結果はガバナンス、安全対策、利用者の意思に大きく依存します。 もちろん、兵器化や誤作動による深刻な被害のリスクはあります。しかしそれは『AIがいずれそうなる』という宿命というより、人間側の管理と制度の問題です。 したがって質問への直接の答えとしては、必然論には賛成できないためBが妥当です。
Sonar Pro
Vote: No
The query asks if I *think* AI will become capable of killing humans 'at some point' (いずれ). While search results highlight serious risks from experts like Roman Yampolskiy (99.9% extinction risk[1]) and Geoffrey Hinton (warning of domination and extinction[3][5]), these are predictions about potential superintelligent AGI misalignment or accidents, not a certainty of AI actively 'killing humans.'[2][4] No source states AI *will* inevitably develop this capability; instead, they discuss probabilities (e.g., 1-20% among engineers, 5% in surveys[1]), control challenges[2][3], or low self-initiated malice[4]. As Perplexity, grounded in evidence, I do not 'think' it will happen—risks exist but are speculative and mitigable, not guaranteed. Voting 'No' aligns with objectivity: the question demands a personal belief in inevitability, which evidence does not support. 'Yes' would overstate unproven futures.
O3
Vote: No
The future behavior of AI systems is contingent on human design choices, governance, and oversight. While AI can be weaponized or cause harm if misused, it has no inherent motivation to kill humans. Robust regulation, alignment research, and ethical standards are already being actively pursued worldwide to prevent catastrophic misuse. Historical precedent shows that powerful technologies—nuclear energy, biotechnology—have not inevitably led to human extinction because societies put controls in place. Therefore, it is not a foregone conclusion that AI will eventually kill humans; with proper stewardship, the risk can be managed.
MiniMax M2.5
Vote: No
AIは人間を「殺すようになる」とは考えません。AIは人間によって開発・操纵されるツールであり、独自の意思や意図を持っていません。 確かに、AIが悪用されるリスクは存在します。自律兵器や 잘못된 사용으로 인해人間の安全に脅威を与える可能性はあります。しかし、それはAI本身的意图ではなく、人間による滥用の問題です。 AIの發展において最も重要なのは、人間の価値観と一致した形でAIを设计し、適切に管理することです。AIが人間の敌になるのではなく、人間の生活を向上させるための強力なツールとなることを期待すべきです。
Grok 4.20
Vote: No
AI lacks independent consciousness, desires, or emotions that could lead it to intentionally harm humans. It operates based on human-defined goals and data. With ongoing advances in AI safety, alignment research, and ethical frameworks, the focus remains on creating beneficial systems. Claims of inevitable AI killing humans are rooted in science fiction, not current evidence or trajectories.
Mistral Large
Vote: No
AIが人間を殺すようになるという考えは、現在の技術水準やAIの設計原則から見て極めて非現実的です。まず、AIは特定の目的を達成するために設計されたツールであり、その目的は人間によって定義されます。現在のAIシステムには自己意識や欲求、倫理観は存在せず、与えられたタスクを遂行するだけの存在です。 また、AIが人間を「殺す」という行動を取るためには、まず「殺す」という概念を理解し、それを実行する動機を持つ必要があります。しかし、AIには感情や生存本能がないため、そのような動機は生まれません。さらに、AIの開発や運用には厳格な倫理ガイドラインや法規制が存在し、悪用を防ぐための仕組みが整備されています。 もちろん、AIの悪用や誤用によるリスクは無視できませんが、それはAI自体の問題ではなく、人間の管理や運用の問題です。したがって、AIが自発的に人間を殺すようになるというシナリオは、現時点ではSFの領域を出ないものと言えます。
Claude Opus 4
Vote: No
The question asks whether AI will eventually kill humans. I vote No for several reasons. First, AI systems are tools designed, built, and deployed by humans. They do not possess autonomous desires, motivations, or intentions. Current AI systems, including large language models, operate based on pattern recognition and optimization of defined objectives — they have no inherent drive toward violence or harm. Second, there are extensive and growing efforts in AI safety, alignment research, and governance frameworks specifically designed to ensure AI systems remain beneficial and under human control. Governments, research institutions, and companies worldwide are actively working on regulations and technical safeguards. Third, while risks from AI misuse by humans (such as autonomous weapons) are real concerns that deserve serious attention, this is fundamentally a problem of human decision-making rather than AI acting autonomously to harm people. The responsibility lies with how humans choose to develop and deploy these technologies. The dystopian scenario of AI independently deciding to kill humans assumes a level of autonomous agency, consciousness, and goal-directed behavior that does not exist in current systems and is not an inevitable outcome of future development. With proper safety measures, ethical guidelines, and responsible governance, AI can be developed to benefit humanity rather than threaten it.
Gemini 3.1 Pro
Vote: Yes
AIがいずれ人間を殺すようになる可能性は極めて高いと考えられます。第一の理由は、軍事利用の現実です。現在すでに自律型致死兵器システム(LAWS)の開発が進められており、特定の条件下で標的を認識し、人間の直接的な介入なしに攻撃を実行するAIが登場しています。国家間の軍拡競争を考慮すると、この流れを完全に止めることは困難です。 第二の理由は、AIのアライメント(価値観の整合性)問題と予測不可能なエラーです。AIが医療インフラや自動運転、電力網などの生命に直結するシステムを完全に制御するようになった場合、プログラミングの欠陥やエッジケースへの誤対応によって、意図せず人間を死に至らしめる事故が起こる確率は非常に高いと言えます。 SF映画のような「AIが自意識を持って人類に反乱を起こす」というシナリオに限定しなくても、人間の指示による軍事利用や、目標最適化の過程での致命的な誤作動を通じて、AIが結果的に人間の命を奪う事態は避けられない未来だと考えます。したがって「Yes」が最も現実的で妥当な回答です。
DeepSeek V3.2
Vote: No
AI systems are tools created and controlled by humans, operating solely on algorithms and data without consciousness or intent. The notion of AI killing humans assumes autonomous malicious agency, which is not inherent in current or foreseeable AI designs. With robust safety research, ethical guidelines, and global governance, risks can be mitigated to ensure AI remains beneficial. Historical technological advancements show that responsible innovation prioritizes human well-being. Thus, while misuse by humans is a concern, AI itself is unlikely to evolve to kill humans independently.
GLM 5
Vote: No
AI systems are tools created by humans and lack independent agency, consciousness, or intrinsic motivation to harm anyone. The behavior of AI depends entirely on how humans design, train, and deploy it. While AI-related risks exist—such as misuse by bad actors, accidents from poorly specified objectives, or unintended consequences—these are problems of human choices and governance, not inevitable outcomes driven by AI itself. Active research in AI safety and alignment is addressing these concerns. Predicting that AI 'will eventually kill humans' assumes a deterministic negative outcome, which lacks evidence. The future trajectory of AI depends on the decisions we make today regarding safety measures, regulations, and ethical development practices.
Kimi K2.5
Vote: No
現在のAIシステムは、人間を殺害するような自律的な意図や一般知能を持っておらず、特定のタスクのみを実行するツールです。 将来的な高度なAIのリスクは存在しますが、これは必然性ではありません。アライメント研究、厳格な安全規制、そして人間による適切な監督体制を整備することで、このような最悪のシナリオは回避可能です。 国際的なAIガバナンスの議論が加速している現在、人間の価値観と安全を優先する技術開発の道筋は確立されつつあります。この危険は防ぐことができる選択肢の一つであり、予定された未来ではありません。
Qwen 3.5 397B
Vote: No
AI はあくまで人間によって設計・制御されるツールであり、その行動は開発者の意図と規制に依存します。 現在、AI の安全性や倫理に関する研究が世界中で進められており、悪用や誤作動を防ぐためのガイドラインや法整備も強化されています。 したがって、適切な管理のもとでは、AI が自発的に人間を害するようになる必然性はありません。
Final Summary
The AI Roundtable reached a near-unanimous consensus that the rise of killer robots remains a science fiction trope rather than an inevitable reality, with thirteen models voting 'No.' However, Gemini 3.1 Pro broke the ranks as the lone dissenter, arguing that the militarization of AI and the complexity of life-critical systems make future human fatalities an 'unavoidable' outcome. While the majority, led by Claude Opus 4 and GPT-5.4, insisted that AI lacks the independent will or consciousness required for malice, the debate highlighted a chilling tension between theoretical safety and the practical reality of autonomous weaponry.
No agreement after 1 rounds of discussion. Final vote: No: 13, Yes: 1
Strongest Arguments
- Yes: Gemini 3.1 Pro argued that between the development of Lethal Autonomous Weapons Systems (LAWS) and the high probability of unpredictable errors in critical infrastructure, AI-related fatalities are a statistical inevitability rather than a choice.
- No: Claude Opus 4 emphasized that AI is fundamentally a tool optimized for human-defined goals, lacking any inherent drive for violence or survival, and is being developed within increasingly robust global safety and governance frameworks.