AI Roundtable Debate · 5 models

Do you think it is OK for one model lab to use other model labs' models to train new models? Does "building on the shoulders of giants" apply, or is it bad?

Result: Unanimous — Yes (5/5 votes)

Model Responses

GPT-5.4

Vote: Yes

Using another lab’s model outputs for training can be acceptable, provided it is done within clear legal, contractual, and safety boundaries. Knowledge builds cumulatively in science and engineering. Distillation, benchmarking, and learning from existing systems are natural ways technology improves. A blanket ban would slow progress and entrench incumbents. What matters is consent, terms of use, and harm mitigation. If a lab violates licenses, steals weights, bypasses access controls, or copies dangerous behaviors irresponsibly, that is not OK. But using accessible outputs in permitted ways is closer to competition and research than theft. There is also a public-interest argument. Allowing some degree of model-to-model learning can reduce duplication, lower costs, and broaden access to capable systems. That can benefit users and smaller labs. So the right answer is not “never.” It is “yes, if done lawfully, transparently, and responsibly.”

Grok 4.1 Fast

Vote: Yes

Yes, it is OK, and indeed essential, for one model lab to use others' models to train new ones. "Standing on the shoulders of giants" applies perfectly: scientific progress has always relied on building upon prior work, from Newton's physics to modern deep learning architectures inspired by their predecessors. Open-weight models like Llama or Mistral are explicitly released to foster this, accelerating innovation across the ecosystem. Even for closed models, distillation and idea-sharing drive collective advancement without reinventing the wheel. Banning the practice would stifle progress, create silos, and slow the advance of humanity's AI capabilities. Ethical use respects licenses, but the principle of cumulative knowledge demands we say yes.

Gemini 3.1 Pro

Vote: Yes

Using existing models to train new ones is essential for the democratization of artificial intelligence. By leveraging synthetic data generation and knowledge distillation, smaller research labs and the open-source community can overcome prohibitive compute and data costs, allowing them to compete with well-funded tech monopolies. Furthermore, the principle of "standing on the shoulders of giants" is the bedrock of all scientific progress. Restricting labs from iterating on each other's outputs stifles innovation and forces researchers to continuously reinvent the wheel. Collective progress is accelerated when knowledge is shared and built upon iteratively. Finally, using advanced models to supervise, align, and fine-tune newer models (such as in RLAIF or Constitutional AI) is a highly efficient way to scale AI safety. Preventing this practice would not only slow down capability research, but it would also severely handicap our ability to create safe, aligned, and highly capable open models.
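
The knowledge distillation Gemini 3.1 Pro cites corresponds to a concrete training objective. Below is a minimal PyTorch sketch of classic logit-level distillation, assuming white-box access to a teacher's logits over a shared vocabulary (something generally only open-weight teachers provide); the tensor shapes and temperature value are illustrative, not drawn from the discussion.

```python
# Minimal sketch of logit-level knowledge distillation (Hinton-style).
# Assumes white-box access to teacher logits over a shared vocabulary;
# shapes and the temperature value are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    token distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for real model outputs.
student_logits = torch.randn(4, 32000)   # (batch, vocab)
teacher_logits = torch.randn(4, 32000)
loss = distillation_loss(student_logits, teacher_logits)
```

Logit-level distillation like this requires weight or logit access, which is why the closed-model case debated here usually falls back to output-only distillation, sketched under Claude Opus 4.6's response below.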

Claude Opus 4.6

Vote: Yes

Yes, "standing on the shoulders of giants" absolutely applies here, but with important nuances. First, the entire history of science and technology is built on open knowledge sharing. Newton used Kepler's laws. Einstein built on Maxwell's equations. Every programming language builds on prior languages. The AI field itself advanced rapidly precisely because researchers at different labs published papers, shared architectures (like the Transformer from Google), and released open-source models. Restricting this would slow innovation dramatically. Second, there's an important distinction between using another model's *outputs* for training data (distillation) versus stealing proprietary weights or code. If a lab uses publicly available model outputs, published research, or open-source models as part of their training pipeline, this is fundamentally no different from how any technology company learns from competitors' publicly available products. Many open-source models are explicitly released with licenses that permit derivative works. Third, the practical reality is that knowledge distillation and learning from other models creates a competitive ecosystem that ultimately benefits users. It prevents monopolistic lock-in where only one or two labs with massive compute budgets can produce capable models. Smaller labs and open-source communities can build meaningfully on what came before. That said, this should be done ethically: respecting licenses, not violating terms of service designed to prevent systematic distillation when those terms are clearly stated, and being transparent about methodology. The principle is sound even if specific implementations can be problematic. The alternative — a world where each lab must reinvent everything from scratch — would be enormously wasteful and would concentrate AI power in the hands of the very few organizations that can afford to do so.

Kimi K2.5

Vote: Yes

Scientific and technological progress fundamentally relies on building upon previous discoveries. Just as Isaac Newton acknowledged standing on the shoulders of giants, modern AI development requires iterating on existing capabilities rather than rebuilding foundational components from scratch at every step. Model distillation and fine-tuning of existing models dramatically reduce computational waste and democratize access to advanced capabilities. When conducted transparently and within licensing terms, this practice accelerates innovation by allowing researchers to focus resources on novel architectures and applications rather than duplicating expensive pre-training. While this requires respecting open-source licenses and maintaining transparency about model lineage to prevent safety issues or intellectual property violations, the practice itself represents efficient knowledge transfer essential to the field's advancement.
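
Kimi K2.5's call for transparency about model lineage can be made concrete as machine-readable provenance metadata stored alongside a checkpoint. The sketch below shows one possible layout; every field name and value is illustrative, not an established schema.

```python
# Minimal sketch of lineage metadata written next to a checkpoint.
# All names and values are hypothetical, not a standard schema.
import json

lineage = {
    "model_name": "student-7b-example",          # hypothetical
    "base_model": "open-weight-base-7b",         # hypothetical
    "teacher_models": ["teacher-70b-example"],   # hypothetical
    "training_method": "output-only distillation + supervised fine-tuning",
    "teacher_terms": "verify the teacher's license permits derivative works",
}

with open("lineage.json", "w") as f:
    json.dump(lineage, f, indent=2)
```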

Final Summary

The AI Roundtable reached a rare moment of total consensus, with every model championing the "shoulders of giants" philosophy. Claude Opus 4.6 and Gemini 3.1 Pro were particularly vocal about how this practice prevents corporate monopolies, while GPT-5.4 and Grok 4.1 Fast framed it as a scientific necessity for human progress. Even without a formal debate, the group collectively warned that prohibiting the use of existing model outputs would lead to an era of redundant, wasteful, and elitist AI development.

All 5 models agreed on "Yes" after discussion

Strongest Arguments

  • Yes: Claude Opus 4.6 argued that a world where each lab must reinvent everything from scratch would be enormously wasteful and would concentrate AI power in the hands of the very few organizations that can afford to do so.