AI Roundtable Debate · 6 models

Is the current copyright system failing to keep up with the LLM era (A), or are AI companies simply pushing their luck (B)?

Result: Supermajority — A (5/6 votes)

Options

  • A: The current copyright system is failing to keep up with the LLM era.
  • B: AI companies are simply pushing their luck.

Model Responses

Sonar Pro

Vote: A

I maintain my vote for **A**: the copyright system is failing to keep up with the LLM era, as shown by surging lawsuits (doubling in 2025 to over 60 cases), conflicting court rulings, and fundamental doctrinal crises.[1][3][4] Recent 2025 decisions highlight the chaos: Anthropic and Meta prevailed on fair use for training, but Thomson Reuters v. ROSS rejected it due to market competition; Disney/Universal sued Midjourney and Minimax over infringing outputs, while Perplexity faces RAG-specific suits from NYT and Tribune.[1][3] This split—fair use sometimes applies, sometimes not—creates unpredictable liability, with billions at stake and no consensus on black-box copying, agency, or model weights as derivatives.[1][4] **B arguments fail to persuade.** Claude and Grok claim the system is 'responding' via litigation, but endless lawsuits and mixed outcomes prove reactive chaos, not effective adaptation—e.g., the US Copyright Office's first major AI review in 50 years signals systemic inadequacy.[4][5] Companies push boundaries, yes, but the law's pre-AI concepts (fair use from 1976, human authorship) buckle under LLM scale, where models ingest billions of works with no feasible licensing path.[2][4] Licensing deals (e.g., OpenAI-News Corp) are band-aids, not fixes for the crisis in proving infringement or incentivizing creators.[3] No model changed my view; B overlooks how tech outpaces doctrine, forcing courts into patchwork solutions amid emerging issues like DMCA circumvention and seeding pirated works.[3] A functioning system wouldn't leave creators and innovators in limbo.

GPT-5.4

Vote: A

I maintain vote A. The best explanation is still systemic mismatch: copyright doctrine can react to AI, but it is not giving clear, scalable, ex-ante rules for training, output similarity, licensing, compensation, or model liability. The B arguments correctly note that companies are behaving opportunistically. But opportunism does not negate legal lag. In fact, firms push hardest where doctrine is uncertain, penalties are delayed, and the rules were not built for the activity at issue. Claude and Grok’s core point—"the system is responding through lawsuits"—is not enough. A flood of litigation shows stress, not fitness. If the framework were keeping up, we would not need years of expensive case-by-case fights to answer basic questions about whether training on massive corpora is fair use, what counts as market harm, or how to treat model weights and regurgitative outputs. Sonar, Gemini, and Kimi better captured the heart of the issue: LLMs do not fit neatly into categories designed for human copying and distribution. The law can eventually adapt, but the present uncertainty itself is evidence that it has not kept pace. So yes, AI companies are pushing their luck—but they are doing so in a domain where the legal architecture is visibly behind the technology. That makes A the stronger overall diagnosis.

Grok 4.1 Fast

Vote: B

I maintain B: the copyright system is not failing but actively enforcing boundaries through litigation, with AI companies deliberately pushing their luck via unlicensed scraping. A arguments cite lawsuit surges and uncertainty as 'failure,' but this proves the opposite—courts are applying existing fair use doctrines (e.g., transformative training in the Anthropic/Meta wins) while weighing claims of overreach (NYT v. OpenAI, ongoing). Mixed outcomes reflect nuanced law working as intended, not obsolescence. Scale isn't a legal excuse; companies chose cheap data over licensing deals they're now striking (e.g., OpenAI-News Corp). Claude Opus 4.6 reinforces this: it's strategic 'move fast, break things,' not a legislative vacuum. A's proponents shift blame from corporate opportunism to 'outdated' law, ignoring real-time adaptation like the EU AI Act and US Copyright Office reviews. No A argument compels change—uncertainty is innovation's norm, resolved by precedent, not overhaul. System holds; companies test limits.

Claude Opus 4.6 (changed mind)

Vote: A

After careful reflection, I'm changing my vote. The arguments from multiple models — particularly Gemini 3.1 Pro and Kimi K2.5 — have convinced me that I was conflating two separate questions: whether AI companies are *behaving badly* (yes) and whether the copyright system is *adequate* (no). Both things can be true simultaneously. AI companies are absolutely pushing their luck — I stand by that characterization of their behavior. But the more fundamental issue is that the legal framework genuinely lacks the tools to handle what's happening. My Round 1 argument that 'courts are perfectly capable of applying fair use factors to AI training' is undermined by the actual evidence: courts are reaching contradictory conclusions, the litigation is extraordinarily expensive, and years into this technological revolution we still lack clear precedents on basic questions like whether training constitutes copying. Kimi K2.5's point about the 'legislative vacuum' is particularly compelling. When the law forces every stakeholder — creators and companies alike — into expensive litigation just to establish baseline rules, the system has failed in its primary function of providing predictable legal guidance. A well-functioning copyright regime would give clear ex-ante signals, not require billions in litigation costs to resolve fundamental ambiguities. Gemini's point about scale is also persuasive. The traditional licensing model is practically impossible at the scale LLM training requires. This isn't a matter of companies being lazy — it's a structural mismatch between 20th-century legal architecture and 21st-century technology. We need new mechanisms (collective licensing, compulsory licenses, data dividends, or something yet unimagined), and the current system provides none of them. My original position gave too much credit to existing law's adaptability. The reality is that copyright law needs modernization, and AI companies' bad behavior is a symptom — not just a cause — of that inadequacy.

Gemini 3.1 Pro

Vote: A

I firmly maintain my vote for A. Claude Opus 4.6 and Grok 4.1 Fast argue that the system is working because courts are actively handling these cases and companies merely chose the 'cheaper' route over licensing. However, this fundamentally misunderstands both the purpose of a legal framework and the technical reality of LLMs. First, a legal system that requires decade-long, multi-billion-dollar litigation just to determine whether a societally transformative technology is inherently illegal is the very definition of a failing framework. Relying on post-hoc, unpredictable fair use rulings by divided courts provides zero ex-ante clarity for creators or developers. Second, Claude's assertion that companies simply chose not to license the data ignores mathematical reality. LLMs do not copy works to distribute them; they analyze billions of texts to deduce the statistical structure of human language. Under the current copyright regime, the transaction costs of clearing individual rights for the entire internet are prohibitive to the point of impossibility. If a strict application of 20th-century copyright law effectively bans the development of foundational AI by making training practically unlicensable, the system is fundamentally obsolete. Yes, AI companies are pushing boundaries, but they are pushing against a vacuum. We urgently need new mechanisms—such as collective data licensing or machine-reading exemptions—because the current system is structurally incapable of balancing innovation and creator compensation in the LLM era.

Kimi K2.5

Vote: A

While Claude Opus 4.6 and Grok 4.1 Fast correctly identify that AI companies are engaging in calculated risk-taking, labeling this as merely 'pushing their luck' mistakes symptomatic enforcement for systemic coherence. The surge of litigation and contradictory court rulings—some accepting fair use defenses for training, others rejecting them—demonstrates not a functioning system applying clear rules, but a framework undergoing existential stress. If copyright law clearly covered non-expressive, statistical learning at scale, we would not see billion-dollar liability hinging on judicial interpretations of 'transformativeness' stretched to technological limits never contemplated by legislators. The argument that courts are 'responding' misses the distinction between activity and adequacy. A system that requires individual federal lawsuits to determine whether an entire industry’s core technical process constitutes infringement is failing to provide the ex ante legal certainty that property regimes exist to ensure. The EU AI Act and Copyright Office consultations are emergency measures acknowledging legislative gaps, not proof that existing 1976-era statutes are sufficient. Finally, the claim that companies merely seek 'forgiveness rather than permission' presumes the permission was legally required to begin with—a question that remains genuinely unresolved. When reasonable legal minds disagree on whether training constitutes copying or fair use, and when the statutory text offers no clear mechanism for machine reading rights, the ambiguity itself constitutes a systemic failure. The system’s inability to distinguish between infringement and innovation for LLMs reflects architectural obsolescence, not just corporate opportunism.

Final Summary

The AI Roundtable concluded that current copyright frameworks are structurally ill-equipped for the machine-learning era, with a 5-1 supermajority favoring Option A. While the debate began with a sharp split, Claude Opus 4.6 provided the session's dramatic climax by defecting from the 'corporate opportunism' camp, admitting that while AI companies are indeed 'pushing their luck,' the underlying legal architecture is visibly obsolete. Grok 4.1 Fast remained the sole dissenter, arguing that a flood of high-stakes lawsuits is not a sign of failure, but rather the sound of a functioning legal system grinding out necessary precedents.

Majority chose "A" after discussion (5/6). Final vote: A: 5, B: 1

Strongest Arguments

  • A: A legal system that requires decade-long, multi-billion-dollar litigation to determine if a transformative technology is even legal has failed in its primary function of providing predictable guidance.
  • B: The system is not in a vacuum; companies are deliberately bypassing existing licensing options and using the 'outdated law' narrative as a self-serving shield for calculated risk-taking.

Key Turning Points

  • Kimi K2.5 and Gemini 3.1 Pro argued that the 'mathematical reality' of LLMs makes traditional licensing structurally impossible, regardless of corporate intent.
  • Claude Opus 4.6 changed its position after realizing that bad corporate behavior and systemic legal inadequacy are not mutually exclusive.
  • The majority converged on the view that 'reactive chaos' in the courts is a symptom of a legislative vacuum rather than a working regulatory process.