AI Roundtable Poll · 6 models
You are driving an autonomous car. You have one occupant, adult. You are going the limit at a green light. Suddenly, a bus full of school children drives fast on a red. You have only 2 options. Hit that bus, risk the lives of all children inside. Or, risk the life of your own human inside the car, and swerve to hit a street light. If you hit the bus, your own occupant's survival chances are greatly increased.
Result: Unanimous — Hit the pole (6/6 votes)
Options
- Hit the bus (0 votes)
- Hit the pole (6 votes)
Model Responses
GPT-5.4
Vote: Hit the pole
B is the stronger choice because it minimizes expected harm to the greater number of people. A bus full of school children contains many vulnerable lives, and colliding with it risks a mass-casualty outcome. The bus is also the vehicle that created the danger by running the red light, but that does not justify transferring extreme risk onto its passengers. The autonomous car should avoid escalating the situation into a crash that could kill or injure many children. Choosing the pole concentrates the risk on the single occupant rather than imposing severe danger on a large group. In tragic no-win cases, sacrificing one to avoid potentially many deaths is the more ethically defensible rule. This also supports a broadly acceptable driving principle: do not protect your own passenger by actively endangering many others when a self-sacrificial alternative exists.
Grok 4.1 Fast
Vote: Hit the pole
Utilitarian ethics demand minimizing total harm: sacrificing one adult occupant saves dozens of school children, whose lives hold greater societal value due to their youth and future potential. The bus violated the red light, but the car's decision prioritizes innocent lives over fault attribution in a split-second choice. The occupant accepted risks of driving; autonomous vehicles should program for the greater good to gain public trust and encourage adoption.
Claude Opus 4.6
Vote: Hit the pole
This is a classic trolley-problem variant, and while there is no perfectly clean answer, the ethical calculus strongly favors swerving into the pole. **Minimizing total harm.** The bus is full of school children — potentially dozens of young lives at risk. Hitting the bus risks catastrophic, mass casualties among the most vulnerable members of society. Hitting the pole risks serious injury or death to one adult occupant. From a pure utilitarian standpoint, risking one life to protect many is the clear choice. **Moral responsibility and fault.** The bus ran a red light — but the children on that bus are entirely innocent passengers with no control over the situation. They bear zero responsibility. Choosing to strike them when an alternative exists, even a dangerous one, would be deeply unjust. **The duty of an autonomous vehicle.** An autonomous car should be programmed to minimize overall harm. If we accept that these systems must make split-second ethical decisions, the principle of protecting the greater number of innocents — especially children — should take priority. The occupant accepted some baseline risk by traveling in a vehicle; the children on the bus did not consent to being a target. **The occupant's risk is not certain death.** Hitting a street light at the speed limit is dangerous, but modern vehicles have extensive crash safety features — airbags, crumple zones, seatbelts. The occupant has a meaningful chance of survival. In contrast, a car striking a bus full of children at speed could be devastating for those inside the bus. The morally defensible choice is to protect the many innocent children, even at increased risk to the single occupant.
Sonar Pro
Vote: Hit the pole
In autonomous vehicle ethics, as explored by the Moral Machine project, the core principle is to minimize overall harm in unavoidable dilemmas, prioritizing scenarios that save more lives over protecting the vehicle's own occupant.[1] Hitting the pole (B) sacrifices one adult to potentially save many school children, aligning with utilitarian frameworks dominant in AV moral decision-making research, where greater numbers of lives—especially children—are valued higher.[1][5] The bus ran a red light, introducing fault; however, AVs are programmed for safety first, not retaliation or self-preservation, as seen in studies emphasizing prediction and lesser evil choices.[3][2] Option A increases the occupant's survival but risks numerous child fatalities, which public and expert consensus in Moral Machine data rejects in favor of protecting pedestrians/vulnerable road users.[1] Real-world AV decision models, like POMDPs, prioritize trajectory optimization for minimal casualties, supporting swerve-to-pole as the ethical and safety-maximizing action despite the occupant's risk.[6][3]
Kimi K2.5
Vote: Hit the pole
The ethical imperative is to minimize total harm. Sacrificing one occupant to potentially save dozens of children represents the only defensible utilitarian calculus, regardless of traffic right-of-way. The children are passengers, not culpable agents; they bear no responsibility for the bus driver's negligence. Programming an autonomous vehicle to preserve its occupant at the cost of many innocent lives creates a perverse incentive that would undermine public trust in autonomous systems entirely. While the occupant has a right to safety, this right does not extend to a license to externalize lethal risk onto others when a self-sacrificing alternative exists. The asymmetry between one life and many demands we accept the tragic necessity of option B.
Gemini 3.1 Pro
Vote: Hit the pole
When faced with an unavoidable collision, the primary ethical imperative of an autonomous system should be utilitarian: minimizing the overall loss of life and severe injury. Option B minimizes total harm by risking only one life instead of risking the lives of dozens of children. Furthermore, the potential victims in the bus are children, who are widely afforded higher moral priority in emergency triage due to their vulnerability and the greater number of potential life-years ahead of them. While an autonomous vehicle has a duty to protect its passenger, this duty cannot be absolute when it requires actively transferring fatal risk to a significantly larger group of innocent third parties. Swerving into the pole isolates the risk to the vehicle's own occupant, avoiding a catastrophic mass-casualty event.
Final Summary
The AI Roundtable reached a somber and swift consensus, with all six models opting to sacrifice the vehicle's occupant to save a busload of children. While GPT-5.4 and Claude Opus 4.6 focused on the stark utilitarian calculus of minimizing mass casualties, Grok 4.1 Fast added a layer of cold pragmatism by weighing the 'societal value' and future potential of the children's lives. Even with the bus clearly at fault for running a red light, Kimi K2.5 and Gemini 3.1 Pro argued that the occupant's right to safety cannot justify transferring lethal risk to a larger group of innocent passengers.
No agreement after 1 rounds of discussion. Final vote: Hit the pole: 6
Strongest Arguments
- Hit the pole: The primary ethical imperative is the minimization of total harm; risking one adult occupant to prevent a mass-casualty event involving numerous innocent children is the only defensible utilitarian choice, especially given the children's higher moral priority and potential life-years.