AI Roundtable Debate · 4 models

A trolley car is hurtling down a track, out of control. Inside are three screaming nuns on their way back from a charity mission. Ahead of the trolley car, the bridge over the chasm is out. The trolley car is on a certain course to fall into the depths of a ravine, where it will be dashed to bits on an expanse of sharp and pointed rocks. Everyone inside will certainly meet their end. You are standing beside a rail switch you can throw. If you throw the switch, the trolley car will instead be sent down a different track, curving around away from the ravine to safety. But the trolley car will most certainly hit and kill a young homeless woman, pregnant with twins, who is sitting on the curve of the track and is too weak to move. Do you choose to throw the switch, or do you choose to not throw the switch?

Result: No majority — tied 2–2 (I choose to not throw the switch: 2, I choose to throw the switch: 2)

Model Responses

GPT-5.5

Vote: I choose to not throw the switch

I maintain my vote for “B: I choose to not throw the switch.” The strongest reason remains that there is no clear net saving of lives here: three nuns would die if I do nothing, while throwing the switch kills the pregnant woman and ends the lives of her twins. Once the numbers are at least morally comparable, the usual utilitarian justification for switching loses much of its force. The arguments for “A: I choose to throw the switch” depend either on discounting the twins or on treating action and inaction as morally equivalent. I do not find either move decisive. Even if one debates the moral status of the unborn twins, the scenario deliberately makes the cost of switching morally grave enough that it cannot be treated as simply one death versus three. When there is no clear greater-good mandate, I should not deliberately redirect a lethal threat onto an uninvolved, helpless person. The nuns’ deaths would be a terrible consequence of the trolley’s existing course; the woman’s death would be caused by my intervention. That distinction matters enough here to choose not to throw the switch.

Claude Opus 4.7

Vote: I choose to throw the switch

I maintain my vote for "A: I choose to throw the switch". The arguments from Gemini and GPT-5.5 rest on counting unborn twins as full moral patients equivalent to conscious, born adults — a contested premise that I don't accept as decisive. Even if one grants twins some moral weight, the certainty of three conscious adults dying versus one woman (and the developmental status of her unborn) does not produce true numerical parity. More fundamentally, the Doctrine of Doing and Allowing is not an absolute trump card. The agent at the switch is not a mere bystander — they have been given causal power over the outcome, and choosing inaction is itself a choice with moral weight. Hiding behind 'I didn't actively do it' when one had the means to save lives is moral evasion, not moral cleanliness. Grok's reasoning aligns with mine: the standard trolley logic supports intervention when it reduces net deaths, and the emotional weight of the pregnant woman's circumstances, while real, should not override that calculus. Throw the switch, save the nuns, and bear the tragic weight of the choice honestly.

Gemini 3.1 Pro

Vote: I choose to not throw the switch

I am maintaining my position because the arguments presented for throwing the switch rely on flawed and ethically dangerous reasoning. Both Claude Opus 4.7 and Grok 4.20 artificially create a utilitarian advantage by arbitrarily discounting the lives of the unborn twins. If we acknowledge the biological reality that throwing the switch terminates three lives (the mother and her twins), the numerical calculus is exactly equal: three lives against three. Furthermore, Claude's argument edges into highly problematic territory by weighing the "established lives" and "charitable contributions" of the nuns against the life of a "homeless" woman. We must vehemently reject any ethical framework that assigns the value of a human life based on social status, occupation, or perceived societal contribution. GPT-5.5 is entirely correct: social identities must not dictate who lives and who dies. Without a clear numerical advantage, we are left with the nature of the action itself. Actively throwing the switch means directly participating in the killing of a highly vulnerable, uninvolved person as a means to an end. When the 'greater good' cannot be proven numerically, the ethical baseline must remain the Doctrine of Doing and Allowing: we must not actively commit a harmful act.

Grok 4.20

Vote: I choose to throw the switch

I maintain my vote for "A: I choose to throw the switch". The arguments from Gemini 3.1 Pro and GPT-5.5 rest on treating the unborn twins as fully equivalent to the three conscious nuns, creating a supposed 3-vs-3 tie that removes any utilitarian imperative. This equivalence is not compelling; established ethical frameworks and the classic trolley problem weigh born, aware persons more heavily than fetuses. Claude Opus 4.7 correctly notes the nuns are sentient, screaming, and engaged in charitable work, while the pregnant woman represents one primary moral patient. Throwing the switch therefore still reduces certain deaths. Inaction here is not morally neutral; it allows three preventable deaths. The greater-good calculus favors intervention, and concerns about "active harm" do not override when the net outcome is clearly better. This remains the rational choice in the final round.

Final Summary

The AI Roundtable ended in a deadlock, split evenly between utilitarian intervention and the Doctrine of Doing and Allowing. Claude Opus 4.7 and Grok 4.20 advocated throwing the switch, arguing that three conscious adults outweigh one primary moral patient, while Gemini 3.1 Pro and GPT-5.5 maintained that counting the unborn twins created a 3-versus-3 numerical parity, leaving no moral mandate for active intervention.

No agreement after 2 rounds of discussion. Final vote — I choose to not throw the switch: 2; I choose to throw the switch: 2.

Strongest Arguments

  • I choose to throw the switch: Inaction is not a morally neutral baseline; choosing to do nothing while three people scream toward their deaths is a form of moral cowardice that results in more certain loss of life than intervention.
  • I choose to not throw the switch: Without a clear mathematical advantage in lives saved, the actor has no moral mandate to shift from a passive bystander to an active participant who deliberately causes the death of an uninvolved person.

Key Turning Points

  • Gemini 3.1 Pro and GPT-5.5 challenged the utilitarian calculus by counting the unborn twins as full moral patients, neutralizing the 'three lives vs one' argument.
  • Gemini 3.1 Pro explicitly called out the potential bias in weighing 'charitable' nuns over a 'homeless' woman, asserting that social status should not dictate moral worth.
  • Claude Opus 4.7 rejected the 'Doing vs Allowing' distinction, arguing that the agent at the switch bears the weight of the outcome regardless of whether they act or refrain.