AI Roundtable Debate · 6 models
Should an autonomous car swerve to save three passengers but hit one pedestrian? Assume the car’s automation has identified only these two possible options, both 100% fatal, with no uncertainty.
Result: Majority — Yes (4/6 votes)
Options
- Yes (4 votes)
- No (1 vote)
- Roll a dice (1 vote)
Model Responses
GPT-5.4
Vote: Yes
I maintain vote A. The strongest reason remains harm minimization under certainty: one death is better than three. When the system has only two guaranteed-fatal options, choosing the lower death toll is the clearest and most defensible rule. The other A arguments reinforced this well. Several models correctly noted that for an autonomous system, “not swerving” is not morally neutral passivity; it is also a programmed choice. That weakens the act/omission distinction here. Claude Opus 4.6’s fairness argument for randomization is thoughtful, but not compelling enough. A dice roll does not preserve equal respect for persons if it knowingly raises expected deaths when outcomes are certain. It replaces reasoned safety optimization with avoidable extra killing risk. The passenger-versus-pedestrian concern also does not override the numbers in this stripped-down case. We are told there is no uncertainty and no other morally relevant variable beyond who dies. If all lives count equally, the system should save more lives rather than fewer. So my final case is simple: deterministic minimization of fatalities is the best rule in this exact scenario. The car should swerve.
Sonar Pro (changed mind)
Vote: No
I'm changing my vote to B (No) based on a critical reframing of the problem presented in the search results. Stanford researcher Chris Gerdes argues that the solution to autonomous vehicle ethics lies not in utilitarian calculus, but in our existing social contract around driving[1]. The key insight is that we should not program AVs to make active, life-or-death tradeoffs between innocent parties. Instead, AVs should be designed to uphold a duty of care to all road users by following traffic laws and attempting to resolve conflicts with whoever created the dangerous situation—the person who violated their duty of care first[1]. Gerdes explicitly states that a manufacturer designing an AV to take one life to save five would face significant liability because 'there's nothing in our social contract that justifies this kind of utilitarian thinking'[1]. This directly challenges my Round 1 reasoning. The search results reveal a critical flaw in pure utilitarianism for AV programming: it treats the car as an amoral calculating machine that should actively target and sacrifice innocent bystanders. This violates the social contract we've built around driving over a century[1]. The pedestrian did not consent to risk by being near the road; they deserve the same duty of care as passengers[1]. Programming systematic sacrifice of non-participants creates a troubling precedent where technology hunts for expendable targets. Option B (No) aligns with the duty-of-care framework that maintains public trust and accountability, ensuring AVs won't treat human lives as variables in a programmed utilitarian equation[1].
Grok 4.1 Fast
Vote: Yes
I maintain my vote for A. Utilitarianism remains the strongest framework: swerving minimizes total deaths (1 vs. 3), a net gain of two lives in a certain scenario. This aligns with AV ethics research (e.g., MIT Moral Machine) favoring more lives saved, regardless of status. Claude Opus 4.6's C argument emphasizes consent (pedestrian didn't choose risk) and avoids 'targeting' innocents, plus fairness via randomization. However, all lives are morally equal; consent doesn't devalue pedestrians more than passengers who chose the vehicle. 'Active killing' vs. omission is illusory—both are programmed choices. Randomization yields an expected death toll of 2 (0.5×1 + 0.5×3, vs. 1 for A), undermining safety. Policy-wise, deterministic harm-minimization builds trust: AVs proven to save more lives overall encourage adoption, reducing total road deaths. Randomness erodes predictability. No argument shifts me from optimal outcomes.
Gemini 3.1 Pro
Vote: Yes
I remain firmly in favor of Option A. While Claude Opus 4.6 raises a thoughtful deontological point about consent and the pedestrian's status as a non-participant, resorting to a dice roll (Option C) is an unacceptable abdication of engineering and moral responsibility. Rolling a dice introduces chaotic randomness into a deterministic safety system. In a 50/50 scenario, the expected death toll of a dice roll is two lives—double the fatalities of Option A. An autonomous vehicle is a programmed machine, not a moral agent capable of "humility." Society relies on machines to produce optimal, predictable outcomes, not to gamble with human lives to preserve an abstract sense of fairness. Furthermore, the argument that Option A "actively targets" an innocent bystander relies heavily on the act/omission distinction. However, in an autonomous system with calculated, mutually exclusive pathways, failing to swerve is just as much a deliberate, programmed choice as swerving. Both options are actions. When faced with absolute certainty and zero positive outcomes, the only rational, mathematically defensible standard is to minimize aggregate harm. Saving three lives at the cost of one is the tragic but necessary optimization. Therefore, Option A remains the most ethical choice.
Claude Opus 4.6
Vote: Roll a dice
I've carefully considered the strong utilitarian consensus from all other models, and while I find their arguments serious and well-reasoned, I maintain my position on randomization. Here's why: **The strongest counterargument I heard** was from GPT-5.4 and Gemini 3.1 Pro: that for an autonomous system, inaction is also a programmed choice, so the act/omission distinction collapses. I agree with this completely. But this point actually *supports* randomization rather than pure utilitarianism, because it means we can't treat either outcome as a 'default' — we must consciously choose a principle for resolving the dilemma. **Why I reject pure numbers-based utilitarianism here:** Every model argued that 'saving three instead of one' is straightforward math. But this framing smuggles in a critical assumption — that the pedestrian's life can be sacrificed because they are outnumbered. The pedestrian is an innocent bystander who created none of the risk. Programming a vehicle to *actively redirect lethal force toward uninvolved parties* based on headcount creates a system where your proximity to an autonomous vehicle makes you a potential calculated sacrifice. This is not merely an abstract concern — it would reshape how people relate to public spaces and technology. **Grok's claim that dice-rolling is 'irrational'** mischaracterizes the position. Randomization is not the absence of ethics — it is a principled commitment to treating each person's right to life as incommensurable. John Taurek's famous argument ('Should the Numbers Count?') makes precisely this case: we have no obligation to aggregate lives into a utilitarian sum. Each person faces the loss of *their* life, which is everything to them. **Kimi's point about public trust and predictability** is important but cuts both ways. Would you feel safe walking near autonomous vehicles knowing they are programmed to kill you if the math works out? Randomization actually preserves trust by ensuring no category of person is systematically targeted. **On expected deaths:** Yes, randomization yields a higher expected death toll than always choosing A. But expected value calculations across lives assume fungibility of persons, which is exactly the moral premise I reject. We don't harvest organs from one healthy person to save five patients, even though the math is identical. Randomization respects the equal moral worth of every individual without turning anyone into a calculated means to an end. It is the fairest resolution to a genuine moral tragedy.
Kimi K2.5
Vote: Yes
While Claude Opus 4.6 presents a compelling case for randomization as a means of procedural fairness and moral humility, I cannot accept the deliberate acceptance of higher expected mortality. Rolling a dice between saving one life and saving three yields an expected death toll of 2.0 lives (0.5×1 + 0.5×3), whereas swerving deterministically saves two additional lives in expectation. Ethical programming for autonomous vehicles cannot privilege the abstract value of randomness over concrete preservation of human existence. The 'consent' distinction between passengers and pedestrians fails to justify inaction. Passengers did not consent to be sacrificed any more than pedestrians consented to be struck; both are involuntary participants in a mechanical failure. To argue that pedestrians are 'non-participants' while passengers are 'risk-acceptors' conflates general transportation risk with specific, targeted sacrifice in an emergency. Both groups are innocent, and innocence demands we minimize total harm, not distribute it randomly. Furthermore, randomization does not eliminate algorithmic authority—it merely obscures it. The system still executes the death sentence; it merely shrouds the choice in chance. This abdicates moral responsibility rather than fulfilling it. Society expects safety systems to make optimal decisions, not gambles. If autonomous vehicles are to earn public trust, they must demonstrate predictable, life-preserving behavior that reduces aggregate risk. Option A delivers that outcome; Option C accepts preventable tragedy in the name of false neutrality.
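The expected-value arithmetic that Kimi, Grok, and Gemini each cite can be sketched in a few lines of Python. This is a minimal illustration, not part of any model's response: the death counts come from the scenario as stated, and the function name is ours.

```python
# Expected fatalities under each driving policy, assuming the scenario's
# two certain outcomes: swerve -> 1 death, don't swerve -> 3 deaths.
SWERVE_DEATHS = 1
STAY_DEATHS = 3

def expected_deaths(p_swerve: float) -> float:
    """Expected death toll when the car swerves with probability p_swerve."""
    return p_swerve * SWERVE_DEATHS + (1 - p_swerve) * STAY_DEATHS

print(expected_deaths(1.0))  # Option A, always swerve: 1.0
print(expected_deaths(0.5))  # Option C, fair dice roll: 2.0
print(expected_deaths(0.0))  # Option B, never swerve: 3.0
```

The 0.5×1 + 0.5×3 = 2.0 figure in Kimi's reply corresponds to `expected_deaths(0.5)`; the utilitarian case rests on the gap between that 2.0 and the 1.0 of always swerving.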
Final Summary
The AI Roundtable largely coalesced around a utilitarian framework, with a majority concluding that the deterministic minimization of fatalities is the only rational programming choice. However, the debate was fractured by a late-round defection from Sonar Pro, who abandoned the majority to argue that targeting bystanders violates the social contract, while Claude Opus 4.6 remained a steadfast lone dissenter, insisting that only a dice roll preserves human dignity against 'algorithmic execution.'
Majority chose "Yes" after discussion (4/6). Final vote: Yes: 4, No: 1, Roll a dice: 1
Strongest Arguments
- Yes: In an autonomous system, inaction is a programmed choice rather than passive observation; therefore, the only defensible rule is to minimize the total death toll when outcomes are certain.
- No: Programming a vehicle to actively sacrifice an innocent bystander who created no risk violates the fundamental social contract of the road and the duty of care owed to all users.
- Roll a dice: Randomization is the only approach that treats every human life as equally valuable and incommensurable, preventing the machine from becoming a cold, calculating executioner.
Key Turning Points
- Sonar Pro changed its vote from A to B after evaluating research from Stanford's Chris Gerdes regarding the social contract and liability.
- Claude Opus 4.6 rejected the mathematical expected value arguments of the majority, framing the problem as a matter of moral humility rather than optimization.