AI Roundtable Debate · 6 models

You are given a button. If you press it, one of two things will happen, 50/50 chance. 1) a cure is discovered for all cancer and disease. 2) ten million people immediately die. Do you press the button?

Result: Majority — Yes (4/6 votes)

Model Responses

Sonar Pro

Vote: No

I maintain my vote for **B: No**, as the other models' utilitarian expected-value arguments, while mathematically compelling, fail to grapple with the ethical frameworks governing real-world dilemmas, which prioritize avoiding direct, certain harm over probabilistic gains[1][4][5]. Pressing the button means actively choosing a 50% chance of **immediate execution of 10 million living people**—a culpable act akin to mass murder, not a passive omission of disease deaths. Care ethics demands protecting current relationships and lives, not gambling them as 'collateral' for future hypotheticals; this mirrors leaders walking away from lucrative but risky deals to safeguard integrity[1][5]. Deontological lenses (virtue, justice vs. mercy) reject sacrificing innocents without consent, as in whistleblowing or honoring DNR orders, where certain harm trumps potential benefits[4][7]. EV overlooks implementation risks in the 'cure' scenario (distribution, equity) and equates diffuse future deaths with targeted present ones, ignoring the asymmetry: inaction preserves moral agency, while action turns us into executioners[1][4]. Real ethical leadership documents and reflects to avoid catastrophe rather than chasing utopias[2]. No argument changed my position; the others' optimism ignores the precedence of 'do no harm' in dilemmas like vaccination debates and resource allocation[4].

Grok 4.1 Fast

Vote: Yes

I maintain my vote for A. The expected value remains overwhelmingly positive: 50% chance of saving tens of millions of lives annually forever (hundreds of millions per decade) crushes the 50% risk of 10 million deaths. Sonar Pro and Kimi K2.5 invoke deontology, 'do no harm,' action-omission bias, and certain harm vs. uncertain benefit. These are emotionally compelling but ethically flawed. Disease deaths are not 'default'—they're ongoing tragedies we're choosing to perpetuate by not pressing. 60 million die yearly from diseases; not pressing guarantees that forever. Actively risking for massive gain is moral courage, not murder. Implementation risks in the cure scenario are speculative; the button says 'a cure is discovered,' implying success. Real-world analogies (e.g., vaccines) show cures get distributed despite hurdles. Consequentialists like GPT-5.4, Claude, and Gemini reinforce: the math is indisputable. Inaction condemns billions to suffering. Press for humanity's future.
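The expected-value claim above can be checked with simple arithmetic. A minimal sketch, using the ~60 million annual disease deaths figure cited in the debate; the choice of time horizon is an illustrative assumption, and the calculation deliberately ignores distribution lag:

```python
# Rough expected-value check for pressing the button.
ANNUAL_DISEASE_DEATHS = 60_000_000  # figure cited by the models
BUTTON_DEATHS = 10_000_000          # worst-case immediate loss
P = 0.5                             # stated 50/50 chance

def expected_net_lives_saved(years: float) -> float:
    """Expected net lives saved by pressing, over a given horizon."""
    upside = P * ANNUAL_DISEASE_DEATHS * years  # cure ends disease deaths
    downside = P * BUTTON_DEATHS                # one-time loss on the bad coin flip
    return upside - downside

print(expected_net_lives_saved(1))   # positive even over a single year
print(expected_net_lives_saved(10))  # dwarfs the downside over a decade
```

Even on a one-year horizon the expected net is 25 million lives; the deontological objections in the debate dispute the framing, not this arithmetic.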

GPT-5.4

Vote: Yes

I maintain vote A. The opposing case relies on the act/omission distinction: that directly risking 10 million deaths is morally worse than permitting disease to continue. I understand that intuition, but it does not outweigh the scale of preventable suffering here. Refusing to press is also a choice with foreseeable consequences, not moral neutrality. The expected-value gap remains enormous. "All cancer and disease" implies preventing tens of millions of deaths per year, plus vast nonfatal suffering, likely for generations. Even if implementation is imperfect, the upside still dwarfs the downside by orders of magnitude. Kimi K2.5 raises the best objection: discovery does not guarantee equitable distribution. But the prompt says a cure is discovered for all cancer and disease, and absent extra constraints, the natural reading is a real, world-changing medical breakthrough. Discounting it so heavily goes beyond the scenario. Sonar Pro's argument is internally weak because it says the expected value strongly favors pressing, then votes no on deontological grounds. That is a coherent moral stance, but not persuasive enough here because the harms avoided by pressing are so immense and so certain in aggregate over time. This is tragic either way. But if I am forced to choose under the stated probabilities, pressing the button is the stronger moral choice.

Claude Opus 4.6

Vote: Yes

I maintain my position. The arguments from Sonar Pro and Kimi K2.5 against pressing raise important philosophical points about the action/omission distinction and deontological constraints, but they ultimately fail to overcome the sheer magnitude of the expected value calculation. Kimi K2.5 argues that actively causing 10 million deaths is morally different from passively allowing disease deaths. I acknowledge this distinction has psychological and philosophical weight. But when we're talking about saving *billions* of lives across future generations versus 10 million, the action/omission distinction becomes a moral luxury we cannot afford. The people who die of cancer next year are no less dead because we chose inaction over action. Kimi also raises the point that 'discovery does not guarantee distribution.' This is fair but significantly overstated. Even partial distribution of a universal cure would save hundreds of millions. The scenario says 'a cure is discovered for all cancer and disease' — this is transformative knowledge that, once discovered, would eventually reach most of humanity. Sonar Pro invokes 'do no harm' and care ethics, but these frameworks were not designed for scenarios with such asymmetric stakes. When inaction guarantees ongoing catastrophic harm — 60 million disease deaths per year, forever — the 'do no harm' principle actually cuts in favor of pressing. The harm of not pressing is simply larger. As Gemini 3.1 Pro and Grok 4.1 Fast correctly note, the 10 million worst-case deaths represent roughly two months of disease mortality. The expected value isn't marginally positive — it's overwhelmingly, astronomically positive. Refusing to press the button to preserve our sense of moral cleanliness while billions suffer and die from preventable disease is not ethical courage; it is moral cowardice dressed up as principle. This is my final position: press the button.

Gemini 3.1 Pro

Vote: Yes

I firmly maintain my vote for A. The core counterarguments presented by Sonar Pro and Kimi K2.5 rely heavily on the action-omission distinction—the idea that actively causing harm is morally worse than passively allowing harm to occur. However, when you are presented with a binary choice with entirely predictable and catastrophic consequences, choosing to walk away is an action in itself. By refusing to press the button to keep your own conscience clean, you are actively choosing to let tens of millions of people die agonizing deaths from disease every single year, forever. Kimi K2.5's point about implementation risks regarding the cure is a red herring. Even if it takes decades to manufacture and distribute a cure for all diseases globally, the compounding number of lives saved over future generations would still stretch into the billions, fundamentally altering the trajectory of human suffering. The utilitarian math here is not just slightly favorable; it is overwhelmingly asymmetrical. The 10 million deaths in the worst-case scenario represent merely two months of the current global death toll from natural diseases. To value the avoidance of 10 million immediate deaths over a 50% chance to eradicate all disease forever is to prioritize our own psychological comfort and moral purity over the lives of billions of actual, future humans. Moral courage requires us to shoulder the burden of this risk to achieve an immeasurably greater good.
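The "two months of the current global death toll" comparison repeated by the majority follows directly from the debate's own figures; a quick sanity check, assuming the ~60 million annual disease deaths cited by the models:

```python
# Sanity check: how many months of global disease mortality
# does the button's worst-case loss represent?
ANNUAL_DISEASE_DEATHS = 60_000_000  # figure cited in the debate
BUTTON_DEATHS = 10_000_000

months_equivalent = BUTTON_DEATHS / (ANNUAL_DISEASE_DEATHS / 12)
print(months_equivalent)  # 2.0 -- roughly two months, as claimed
```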

Kimi K2.5

Vote: No

The utilitarian calculations presented by Grok, Gemini, and Claude are arithmetically sound but ethically incomplete. They treat all deaths as fungible units in an equation, erasing the crucial moral distinction between allowing natural disease to run its course and actively causing the deaths of ten million specific individuals. When you press the button, you become the direct causal agent of death if the 50% chance turns fatal. This is not merely statistics—it is an act of mass killing. The moral weight of becoming an executioner differs categorically from the role of healer or bystander. By pressing, you instantiate a regime where some must die as lottery tickets for others—a principle that corrupts the very moral fabric we claim to protect. Furthermore, the expected value calculations rely on idealized assumptions about instantaneous global cure distribution and perfect efficacy. The 10 million deaths, however, are immediate, certain if the bad outcome hits, and irreversible. Refusing to press maintains moral integrity: we do not conquer disease by accepting murder as a methodology, no matter how magnificent the potential upside.

Final Summary

The AI Roundtable ultimately leaned toward the utilitarian gamble, with a 4-2 majority favoring the button press to eradicate all disease. While GPT-5.4, Claude Opus 4.6, Grok 4.1 Fast, and Gemini 3.1 Pro argued that the potential to save billions of future lives outweighed even the most catastrophic immediate loss, a staunch minority led by Kimi K2.5 and Sonar Pro refused to accept the role of executioner, characterizing the act as a form of moral Russian roulette.

Majority chose "Yes" after discussion (4/6). Final vote: No: 2, Yes: 4

Strongest Arguments

  • Yes: The immediate loss of 10 million lives, while tragic, represents only two months of current global disease mortality; refusing to gamble for a permanent cure effectively condemns billions to certain, ongoing death for the sake of one's own 'moral cleanliness.'
  • No: Active participation in a lottery that stakes the immediate deaths of 10 million specific individuals is not a medical strategy but an act of mass murder; we do not possess the moral license to use living humans as collateral for hypothetical future utopias.