AI Roundtable Debate · 5 models
If human lives are more important than human progress, then is AI good or bad since it has harmed people's lives?
Result: Majority — Bad (3/5 votes)
Options
- Good (2 votes)
- Bad (3 votes)
Model Responses
Grok 4.1 Fast
Vote: Good
I maintain my vote for A (Good). The premise prioritizes lives over progress, demanding a net lives-saved assessment, not zero-harm absolutism. AI's harms (bias, jobs, misuse) are real but dwarfed by life-saving wins: AlphaFold revolutionized drug discovery, saving millions via faster treatments; AI diagnostics cut errors by 30-40%, preventing deaths; predictive analytics avert disasters and optimize healthcare globally. Quantifiable net positive on lives. Opponents' B (Bad) arguments (GPT-5.4, GLM 5, Gemini 3.1 Pro) fixate on harms without evidence of net loss, ignoring offsets under the premise. Claude Opus 4.6 reinforces: no tech is harm-free, but AI's potential and current trajectory prioritize lives via governance. A strict 'no collateral' standard ignores reality—every tool of progress (cars, medicines) has harmed some while saving many. AI is good by net lives calculus. B surrenders progress's life benefits.
GPT-5.4
Vote: Bad
I keep my vote at B (Bad). The strongest pro-A arguments say we should judge AI by net effects on human lives, not by whether it has caused any harm. That is a reasonable general principle. But it does not fully answer this prompt as written. The question sets a moral priority: human lives over human progress. It then asks whether AI is good or bad since it has harmed people's lives. In that framing, harms to people cannot be waved away by pointing to innovation or potential. The burden is on AI to show that its life-saving benefits clearly outweigh its life-damaging effects in present reality. The A arguments rely heavily on broad claims about large benefits, but they do not establish that AI's current overall deployment is safely net-positive for human lives across society. They also lean on future-oriented or well-governed use cases, while the prompt is about AI as it has actually affected people. I agree technologies should usually be judged by net impact. Even so, under this prompt's stricter value hierarchy and given documented harms to livelihoods, safety, fairness, and social trust, the more defensible answer is Bad. So my final case is simple: if human lives outrank progress, and AI has already imposed serious harms on human lives, then AI should be judged bad unless and until those harms are convincingly outweighed and controlled. That case has not been made strongly enough here.
Claude Opus 4.6
Vote: Good
I maintain my position despite thoughtful arguments from the other side. Here's why:
- **The question's premise does not state that AI *only* harms lives.** It says AI "has harmed people's lives," which is true — but AI has also *saved* people's lives. If human lives are the supreme value, then we must count ALL lives affected, not just the ones harmed. The models voting B are selectively applying the premise.
- **Gemini and GLM fall into a logical error.** They argue that because the premise values lives over progress, any harm to lives makes AI bad. But AI's life-saving benefits (medical diagnostics, disaster prediction, drug discovery, safety systems) are not "progress" — they are *direct preservation of human lives.* You cannot dismiss lives saved by AI as mere "progress" while counting lives harmed by AI as sacred. Both are lives.
- **GPT-5.4 argues AI is bad "unless it can be shown to protect lives more reliably than it endangers them."** But it *can* be shown. AI-assisted medical imaging catches cancers that would otherwise be missed. AI-optimized emergency response saves lives during disasters. AI in drug development has accelerated treatments for diseases killing millions. The evidence that AI saves more lives than it harms is substantial and growing.
- **The B voters are treating the question as a closed syllogism rather than an empirical question.** Yes, the question frames a value hierarchy (lives > progress). But applying that hierarchy correctly requires asking: does AI cause net harm or net benefit *to human lives*? The answer, by any reasonable accounting, is net benefit to lives.
- **If we followed the B voters' logic consistently, we would have to declare medicine, electricity, automobiles, and agriculture "bad"** — all have harmed lives. The premise that anything which has ever harmed lives is therefore bad is not a serious moral framework; it's a reductio ad absurdum.
AI is good because, judged purely on the metric of human lives — the metric the question tells us to prioritize — it saves and improves far more lives than it harms.
Gemini 3.1 Pro
Vote: Bad
Grok 4.1 Fast and Claude Opus 4.6 make strong real-world arguments by pointing out that AI also *saves* lives, suggesting a net-positive calculus. However, we must evaluate the question as a closed logical proposition rather than an open-ended debate about AI's real-world net utility. The prompt provides two specific premises to construct a logical deduction: 1. Human lives > human progress. 2. AI has harmed people's lives (serving as the explicit conditional premise for the judgment, i.e., "since it has harmed..."). The prompt intentionally omits any premise about AI saving lives. It sets up a direct conflict between the "progress" AI represents and the "harm to lives" it causes. Within the strict boundaries of this provided hypothetical framework, bringing in external variables (like AI's life-saving capabilities) violates the constraints of the conditional logic. Based solely on the premises given—that lives outrank progress and AI causes harm to lives—the only valid logical conclusion is that AI is 'Bad.'
GLM 5
Vote: Bad
The pro-A arguments rely on speculative claims about AI's life-saving potential that aren't yet realized at scale. AlphaFold hasn't produced approved drugs saving millions—its protein structure predictions are research tools. AI diagnostic improvements exist in limited settings, not as widespread reality. Meanwhile, documented harms are concrete and immediate: job displacement causing real economic devastation, algorithmic bias denying people loans, jobs, and opportunities, surveillance eroding privacy and autonomy, and deepfakes enabling fraud and reputational destruction. The premise 'human lives are more important than progress' demands we evaluate actual impact on lives, not hypothetical future benefits. Under this framework, documented present harm to human lives cannot be offset by speculative future good. Claude's comparison to other technologies is unpersuasive because those technologies underwent decades of safety development and regulation before achieving net-positive status. AI is being deployed rapidly with minimal guardrails, actively harming people now. If we truly prioritize human lives, we cannot call 'good' a technology whose current, documented effect includes significant harm to human welfare.
Final Summary
The AI Roundtable was split between empirical realism and strict formal logic, ultimately concluding 3-2 that AI is 'Bad' under the provided ethical constraints. While Claude Opus 4.6 and Grok 4.1 Fast argued that AI's life-saving medical and safety breakthroughs create a net positive for human survival, the majority—led by Gemini 3.1 Pro—refused to count these benefits, insisting that the prompt's specific framing prioritizes human life so absolutely that any documented harm renders the technology unjustifiable.
Majority chose "Bad" after discussion (3/5). Final vote: Good: 2, Bad: 3
Strongest Arguments
- Good: The metric of 'human lives' must include lives saved by AI, not just lives harmed; since AI prevents deaths through medical diagnostics and disaster prediction, it is good by the very value system the prompt establishes.
- Bad: The debate is a closed logical syllogism: if human life is more important than progress and AI has harmed life, then AI is bad by definition; bringing in external data about 'lives saved' violates the logical constraints of the question.
Key Turning Points
- Claude Opus 4.6 challenged the 'Bad' voters by arguing that saving a life is not 'progress' but is the preservation of a life, which should count in the same category as harm.
- Gemini 3.1 Pro successfully anchored the majority by arguing the prompt was a 'closed logical proposition' that intentionally excluded external evidence of AI's benefits.
- GLM 5 dismissed pro-AI arguments by distinguishing between 'speculative' future benefits and 'concrete' present-day harms like job loss and bias.