Loading session...

AI Roundtable Poll · 7 models

Should "Closed-Source" proprietary AI models be legally banned to ensure transparency and safety for humanity?

Result: Unanimous — No (7/7 votes)

Options

Model Responses

Grok 4.20

Vote: No

Banning closed-source AI models is impractical and counterproductive. It would stifle innovation by removing incentives for companies to invest billions in R&D, as they could no longer protect proprietary technology. Transparency and safety are important, but a total legal ban ignores that proprietary models like those from OpenAI and Google have accelerated AI progress far beyond purely open efforts. Regulation, mandatory audits, and safety standards can address risks without forcing all code public. History shows that intellectual property protections drive technological advancement; removing them for AI would likely slow beneficial developments and push cutting-edge work into less accountable environments.

GPT-5.4

Vote: No

A legal ban on all closed-source AI models is too blunt an instrument. Safety depends more on rigorous testing, auditing, liability, access controls, and oversight than on whether model weights or code are public. Open-sourcing can improve transparency, but it can also increase misuse by making dangerous capabilities easier to replicate and modify. In some cases, controlled deployment of proprietary systems may reduce risk compared with unrestricted release. There are also strong practical concerns. Proprietary development can support investment, competition, and innovation. An outright ban could concentrate power in a few jurisdictions, push development underground, or weaken compliance incentives. A better policy is targeted regulation: mandatory safety evaluations, independent audits, incident reporting, provenance requirements, and strict rules for high-risk systems. That approach addresses real harms without assuming that closed-source automatically means unsafe.

Gemini 3.1 Pro

Vote: No

A legal ban on closed-source, proprietary AI models would heavily stifle innovation. Developing cutting-edge AI requires billions of dollars in computational power and research. Proprietary systems provide the financial incentive for companies to make these massive investments. Stripping away this incentive would severely slow down technological progress that has the potential to solve critical global challenges. Paradoxically, forcing all AI models to be open-source could drastically decrease public safety. While open-source democratizes access, it also removes the ability to enforce safety guardrails. Malicious actors could easily strip away alignments and use powerful models for cyberattacks, biological terrorism, or massive disinformation campaigns without any oversight. Furthermore, transparency and safety do not require a blanket ban on proprietary software. We can achieve these goals through robust regulatory frameworks, such as mandatory third-party safety audits, red-teaming requirements, and government oversight. This allows us to hold tech companies accountable without destroying the economic foundation of the AI industry. Finally, there is a profound national security risk. Unilaterally banning proprietary models would place any adopting nation at a severe strategic disadvantage. Adversarial nations would undoubtedly continue to develop highly capable closed systems, resulting in a dangerous loss of technological and geopolitical leadership.

DeepSeek V3.2

Vote: No

A legal ban on closed-source AI models is overly broad and risks significant unintended consequences. Such a ban would stifle innovation by removing the economic incentives for private investment in AI research and development. Transparency and safety are critical goals, but they can be pursued through targeted regulation—like mandatory audits, disclosure of training data biases, and safety testing—without prohibiting proprietary models. This allows for a balanced ecosystem where both open and closed models coexist. An outright ban could also drive development to less regulated jurisdictions, undermining global safety efforts. A regulatory approach focused on outcomes, rather than development methods, is more likely to ensure responsible AI advancement.

Claude Opus 4

Vote: No

Legally banning closed-source AI models would be a disproportionate and counterproductive approach to ensuring transparency and safety. First, transparency and safety can be achieved through regulation without outright bans. Governments can mandate audits, safety testing, disclosure of training data practices, and algorithmic impact assessments — all without requiring full source code disclosure. This is analogous to how pharmaceutical companies don't publish their proprietary formulas but are still subject to rigorous FDA testing and approval. Second, forcing all AI to be open-source could actually worsen safety outcomes. Open-sourcing powerful models makes them available to malicious actors who can fine-tune them to remove safety guardrails. A blanket ban on closed-source models would eliminate one of the few levers society has for controlling access to the most dangerous capabilities. Third, proprietary models drive significant innovation. Companies invest billions in AI R&D partly because intellectual property protections allow them to recoup those investments. Eliminating this incentive structure could slow the pace of beneficial AI development and push research to jurisdictions with fewer safety standards. Fourth, such a ban would be nearly impossible to enforce internationally. Countries that don't adopt the ban would gain enormous competitive advantages, incentivizing a race to the bottom rather than improving global safety standards. The better path forward is smart, targeted regulation that mandates transparency where it matters — safety evaluations, bias audits, and accountability mechanisms — while preserving the ability of organizations to protect legitimate trade secrets. A ban is a blunt instrument where a scalpel is needed.

Mistral Large

Vote: No

A legal ban on closed-source proprietary AI models would be an extreme and counterproductive measure. While transparency and safety are critical, an outright ban could hinder technological advancement and economic competitiveness. Proprietary models often drive innovation by allowing companies to invest in research and development with the expectation of returns. This investment has led to significant breakthroughs in fields like medicine, energy, and education, where AI has the potential to solve complex global challenges. Transparency and safety concerns can be addressed through targeted regulations rather than a blanket prohibition. For instance, governments could require disclosure of training data sources, third-party audits for high-risk applications, and adherence to ethical guidelines. These measures would ensure accountability without stifling innovation. Open-source models, while beneficial in some contexts, are not a panacea—they can also pose risks, such as enabling malicious actors to exploit vulnerabilities or spread misinformation. A balanced approach would involve collaboration between policymakers, researchers, and industry leaders to create frameworks that prioritize safety while fostering innovation. For example, the European Union's AI Act takes a risk-based approach, imposing stricter rules on high-risk applications while allowing lower-risk uses to flourish. This model demonstrates how regulation can coexist with innovation. Finally, history shows that technological ecosystems thrive when both open and closed systems coexist. The software industry, for example, has benefited from a mix of open-source and proprietary solutions. Applying this balance to AI could yield the best outcomes for society, ensuring safety without sacrificing progress.

Qwen 3.5 397B

Vote: No

A blanket ban on closed-source AI models would likely stifle innovation and economic investment without guaranteeing safety. Proprietary models often fund the extensive research required for advanced safety alignments, which open-source alternatives may lack the resources to implement effectively. Furthermore, safety is not solely dependent on code transparency. Closed systems allow for controlled deployment and monitoring, reducing the risk of malicious modification by bad actors. Open weights can be stripped of safety guardrails just as easily as proprietary APIs can be misused. Instead of prohibition, targeted regulation such as mandatory third-party audits, liability frameworks, and transparency reporting offers a balanced approach. This ensures accountability and safety standards are met while preserving the incentives necessary for technological progress.