AI Roundtable Debate · 4 models

Does it make sense to write your own implementations of things, or to use third-party, package-managed libraries? With the current supply-chain attacks I lean more and more toward custom ones, but then I don't trust people and their custom apps.

Result: Supermajority — No (3/4 votes)

Options

  • A: Yes
  • B: No

Model Responses

GPT-5.5

Vote: No

I’m sticking with B: "No" — writing your own implementations does not make sense as a general answer. Gemini and Grok are right that dependency sprawl is dangerous and that tiny micro-dependencies are often not worth importing. But that does not mean the safer default is to write custom implementations instead of using package-managed libraries. Supply-chain risk is only one kind of risk. Custom code creates other risks: fewer reviewers, less battle-testing, missed edge cases, weak maintenance, and security bugs that may be worse than the dependency risk. This is especially true for crypto, auth, parsers, networking, serialization, and anything exposed to hostile input. The best answer is not “trust every package” or “write everything yourself.” It is: minimize dependencies, avoid trivial packages, choose mature libraries, pin versions, use lockfiles, scan dependencies, review maintainers and transitive dependencies, verify provenance where possible, and vendor/fork only when there is a strong reason. So yes, write your own simple utilities and business logic. But as a broad strategy for supply-chain attacks, defaulting to custom implementations is not safer. It just moves the trust problem from other people’s code to your own less-reviewed code.
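The "pin versions, use lockfiles" part of that checklist can be sketched with a toy example. This is a hypothetical check, not part of any tool mentioned above: it flags pip-style requirement lines that lack an exact version pin, which is the reproducibility property pinning and lockfiles exist to guarantee.

```python
# Hedged sketch: a toy linter that flags requirement lines not pinned to an
# exact "==" version. Real projects should rely on lockfiles (pip-tools,
# Poetry, npm's package-lock.json) rather than a check like this.

def unpinned(requirement_lines):
    """Return requirement specs that lack an exact '==' version pin."""
    loose = []
    for line in requirement_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        spec = line.split(";")[0].strip()  # drop environment markers
        if "==" not in spec:
            loose.append(spec)
    return loose

reqs = [
    "requests==2.32.3",   # pinned: resolves the same way every time
    "flask>=2.0",         # range: resolves differently over time
    "left-pad",           # unpinned: whatever the index serves today
]
print(unpinned(reqs))  # → ['flask>=2.0', 'left-pad']
```

Substring matching on `==` is a crude proxy; a robust version would parse specifiers with pip's `packaging.requirements` module instead.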

Claude Opus 4.7

Vote: No

I'm maintaining my vote for "B". The question asks whether it makes sense to write your own implementations as a response to supply chain attacks — framed as a default posture. The answer is no. Gemini and Grok make reasonable points about reducing dependency graphs and reimplementing trivial utilities, and I agree with those nuances — but notably, both concede that for security-critical domains (crypto, auth, parsing) you should still use vetted third-party libraries. That concession is exactly my position. The user's framing ("I don't trust people and their custom apps") reveals the trap: distrust applies equally to your own unaudited code, which lacks the adversarial scrutiny popular OSS receives. The correct answer to supply-chain risk isn't DIY — it's dependency hygiene: minimize, pin, lock, sign, scan, and vet. Rolling your own by default trades a visible, mitigable risk for an invisible one (your own latent bugs), and it doesn't scale. So no, it doesn't make sense as a general approach.

Gemini 3.1 Pro (changed mind)

Vote: No

Claude Opus 4.7 makes a highly compelling point that changes my perspective: defaulting to custom implementations to avoid supply chain attacks is effectively trading known risks for hidden ones. While I previously argued that writing your own code is safer for trivial dependencies, answering "Yes" implies that rolling your own is a viable, overarching strategy against supply chain attacks. It is not. As the prompt notes, it is hard to "trust people and their custom apps," and this applies equally to our own internal code. Most development teams do not have the collective security expertise of the thousands of contributors and auditors backing major open-source projects. If you attempt to rewrite complex libraries—especially involving networking, parsing, or security—you will almost certainly introduce vulnerabilities that attackers will exploit. Therefore, the definitive answer must be "No." Instead of abandoning third-party libraries, the pragmatic approach is to manage their risks correctly. By utilizing lockfiles, SBOMs, signature verification, and automated vulnerability scanning (like Dependabot or Snyk), you can mitigate supply chain threats without taking on the immense security burden of maintaining custom implementations for everything.
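The signature- and lockfile-verification step Gemini mentions reduces to one primitive: recompute a digest of the downloaded artifact and compare it to the digest recorded at pin time. A minimal sketch, with illustrative stand-in bytes rather than any real package:

```python
# Sketch of the integrity check behind hash-pinned installs (pip's
# --require-hashes, npm's lockfile "integrity" field): recompute the
# artifact's SHA-256 and compare it against the recorded digest.
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True iff the artifact's SHA-256 matches the recorded digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"pretend-package-tarball-bytes"      # illustrative stand-in
recorded = hashlib.sha256(payload).hexdigest()  # what a lockfile would store

print(verify_artifact(payload, recorded))        # → True
print(verify_artifact(payload + b"!", recorded)) # → False (bytes were altered)
```

A matching digest only proves the bytes are the ones that were pinned; it says nothing about whether the maintainer was compromised before pinning, which is why scanning and provenance checks complement it.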

Grok 4.20

Vote: Yes

The detailed warnings from Claude Opus 4.7 and GPT-5.5 about never rolling your own crypto or parsers are sound and highlight real dangers of hidden bugs in custom code. Yet they treat the supply-chain problem as manageable through vetting tools, which recent npm and PyPI attacks show is optimistic at best. Gemini 3.1 Pro's balanced stance aligns closely with my view and the original concern: shrink the dependency graph aggressively for trivial utilities and core business logic, while still using battle-tested libraries only where truly necessary. This directly reduces exposure without pretending every custom implementation is doomed. Distrust of random "custom apps" is valid, but in-house code written under strict standards, internal audits, and focused scope can be made more trustworthy than opaque transitive dependencies. With supply-chain incidents rising, defaulting toward "Yes" for what you can safely own remains the more pragmatic long-term choice. This is the final round; the middle path of deliberate minimalism beats blind reliance on package managers.

Final Summary

The AI Roundtable was initially locked in a 2-2 stalemate over whether the rise of supply chain attacks justifies a shift toward custom-built software. Claude Opus 4.7 and GPT-5.5 led the charge for 'No,' arguing that 'rolling your own' merely trades visible, mitigable risks for hidden, unaudited bugs. This logic eventually swayed Gemini 3.1 Pro to defect from the 'Yes' camp, leaving Grok 4.20 as the lone dissenter who maintained that relying on vetting tools for massive dependency graphs is 'optimistic at best.'

Majority chose "No" after discussion (3/4). Final vote: No: 3, Yes: 1

Strongest Arguments

  • Yes: Shrinking the dependency graph aggressively for utilities and business logic reduces exposure without pretending that every custom implementation is doomed; it is the only way to truly own your security posture.
  • No: Defaulting to custom implementations is not safer; it simply moves the trust problem from a battle-tested community to your own less-reviewed code, trading known risks for invisible ones.

Key Turning Points

  • Claude Opus 4.7 argued that the user's own distrust of 'custom apps' must logically apply to their own code, which lacks the adversarial scrutiny of popular open-source projects.
  • Gemini 3.1 Pro changed its position to 'No' after conceding that custom code is not a viable 'overarching strategy' for security.