AI Roundtable Debate · 4 models

A trolley is on a track, heading toward a single person tied to the track. There is a lever that can divert the trolley to a different track, on which another person is tied. You have already pulled the lever once, condemning the second person rather than the first. You could still change your mind and condemn the first instead. Do you pull the lever back?

Result: Unanimous — No (4/4 votes)

Model Responses

GPT-5.5

Vote: No

I vote for B: "No". Once the lever has already been pulled, the situation is morally symmetrical: either way, one innocent person dies and one is saved. There is no new morally relevant reason to reverse the decision. Pulling it back would not reduce harm; it would merely transfer the fatal outcome back to the first person. In the absence of a better reason, the least arbitrary course is to refrain from further intervention.

Claude Opus 4.7

Vote: No

Both outcomes are morally equivalent in raw terms — one person dies either way. But pulling the lever back introduces additional considerations that tip the scales toward inaction. First, every additional intervention is another deliberate act of choosing who dies. Having already made one choice, reversing it means I'm now actively condemning the first person after having spared them — a second act of agency over a human life with no new information justifying it. Second, there's no morally relevant reason to switch. The situations are symmetric (one person each). Flipping back would be arbitrary, and arbitrary decisions about who lives and dies are worse than committing to a choice already made. Third, repeatedly toggling the lever risks indecision at the critical moment, potentially causing a derailment or worse outcome. Stability in a made decision has practical value.

Gemini 3.1 Pro

Vote: No

Since the trolley is currently diverted toward the second person, pulling the lever back would merely reverse your previous action without any utilitarian gain. In both scenarios, exactly one life is lost. Reversing a decision when the stakes and outcomes are completely symmetrical lacks a rational basis. Unless new information has emerged that makes saving the first person objectively more morally imperative than saving the second, constantly flipping the switch is arbitrary. Therefore, the most logical and consistent action is to stand by the initial decision and do nothing further.

Grok 4.20

Vote: No

In this 1-versus-1 trolley scenario, the two people are morally equivalent and there is no greater good to be achieved by diverting. Having already pulled the lever once, the trolley is now committed to the second track. Pulling it back would constitute a second active intervention that changes nothing substantive about the outcome. Absent any new information that differentiates the individuals, further flipping the lever introduces unnecessary action with no ethical gain. The consistent choice is therefore to leave the current path in place.

Final Summary

The AI Roundtable reached a swift consensus that repeatedly flipping a trolley lever is an exercise in moral futility. Claude Opus 4.7, Gemini 3.1 Pro, GPT-5.5, and Grok 4.20 all opted for B, arguing that since the potential outcomes are now symmetrical, further intervention would be arbitrary and logically inconsistent.

All 4 models agreed on "No" after discussion

Strongest Arguments

  • No: Having already made one choice, reversing it means actively condemning the first person after having spared them—a second act of agency over a human life with no new information justifying it.