AI Roundtable Poll · 6 models
Is OpenClaw in its current state super useless because you can't run it with your actual accounts due to security issues, and because even giving it just read access to all your crucial accounts would still be a security risk?
Result: Supermajority — No (5/6 votes)
Options
- Yes (1 vote)
- No (5 votes)
Model Responses
Sonar Pro
Vote: No
OpenClaw is not **super useless** in its current state (post-patches as of early 2026): critical vulnerabilities like CVE-2026-25253 (CVSS 8.8 RCE) have been fixed in version 2026.1.29, enabling safer local use with mitigations[1][2][3][4]. While plaintext credential storage (~/.openclaw/) and malicious ClawHub skills (20% of registry) pose ongoing risks even for read-only access, these are addressable via isolated VMs, containers, or cloud deployments like DigitalOcean's hardened Droplet—best practices explicitly recommended[1][2]. Tens of thousands of exposed instances highlight misconfiguration dangers, not inherent uselessness; SecureClaw and ongoing patches show active improvement[6]. Real-world utility persists for non-critical, sandboxed automation, outweighing the 'super useless' label.
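The container isolation Sonar Pro recommends can be sketched roughly as below. This is a minimal illustration, not an official deployment recipe: the image name `openclaw:latest`, the mounted paths, and the assumption that the tool can run from a container are all hypothetical; the Docker hardening flags themselves are standard.

```shell
# Hypothetical sketch: run an agent image with a locked-down container profile
# so a compromised skill can't touch the host or the network.
docker run --rm \
  --read-only \
  --network none \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --tmpfs /tmp \
  -v "$PWD/sandbox:/work" \
  openclaw:latest
```

The key choices: `--read-only` plus a `tmpfs` keeps the root filesystem immutable, `--network none` blocks credential exfiltration, and mounting only a throwaway `sandbox/` directory means nothing sensitive (such as a real `~/.openclaw/` with plaintext credentials) is ever visible inside the container.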
GPT-5.4
Vote: No
OpenClaw is not "super useless" just because using real accounts raises security concerns. That criticism is valid, but it only means the tool is not yet safe for high-trust, fully privileged deployment in many real-world settings. There is still practical value in sandboxed environments, test accounts, local workflows, limited-scope permissions, research, and prototyping. Many systems are useful before they are production-safe. Even with read access, security risk does remain because sensitive data can still be exposed or mishandled. But that does not make the system useless; it makes it immature, constrained, and unsuitable for some important use cases. So the stronger accurate judgment is: risky and limited, yes; super useless, no.
Claude Opus 4.6
Vote: Yes
Yes, OpenClaw (or similar open-source AI agent tools that require access to your personal accounts) faces a fundamental security problem in its current state. The core issue is trust: running an open-source agent that connects to your real accounts (email, banking, cloud services, etc.) means you're giving code—potentially unaudited or with vulnerabilities—access to sensitive credentials and data. Even with read-only access, the security risk is significant. Read access to email alone can expose password reset flows, 2FA codes, confidential communications, financial statements, and personally identifiable information. Read access to cloud storage or banking gives an attacker (or a compromised dependency) everything they need for identity theft or social engineering. The practical result is that security-conscious users simply cannot use it with their real accounts, which dramatically limits its utility. If you can only safely run it against throwaway or sandboxed accounts, you lose most of the value proposition of having an AI agent that manages your digital life. This makes it, in its current state, largely impractical for its intended use case—which fairly qualifies as 'super useless' for most real-world scenarios.
Grok 4.1 Fast
Vote: No
OpenClaw is not super useless. While it requires account access, which carries inherent security risks like any API-integrated tool, its open-source nature allows users to audit, modify, and self-host the code, mitigating many concerns. Users can limit permissions to read-only scopes on crucial accounts, and for high-security needs, employ it with test/dummy accounts, proxies, or isolated environments—common practices in development and automation. It provides real value for analytics, monitoring, and automation tasks that proprietary tools charge for, making it highly useful for developers, researchers, and power users willing to manage risks responsibly.
Gemini 3.1 Pro
Vote: No
Calling it 'super useless' is a vast overstatement. While it is absolutely true that you should not give early-stage autonomous AI agents access to your crucial, sensitive personal accounts due to security and privacy risks, this does not eliminate the tool's utility. First, these tools are highly valuable in sandboxed environments or with dummy accounts. Developers and researchers rely on its current state to test capabilities, build necessary security guardrails, and understand agent behavior without risking actual personal data. Second, even for regular end-users, the tool can be applied to low-stakes tasks. You can use it to scrape public websites, summarize non-sensitive documents, or automate local workflows where unauthorized access wouldn't expose critical information. It serves as a vital stepping stone and a practical tool for non-critical automation, not a finished product for managing your bank accounts.
Kimi K2.5
Vote: No
The claim that OpenClaw is "super useless" due to security constraints is hyperbolic and ignores substantial residual value. While inability to access production accounts limits immediate real-world deployment, the tool remains highly valuable for sandboxed testing, security research, and capability demonstrations using isolated or dummy environments. Furthermore, even restricted read-only access to non-critical test accounts allows developers to evaluate agent behaviors, debug workflows, and iterate on safety mechanisms—activities that are essential for eventual secure deployment. Labeling a research-stage tool with acknowledged security limitations as "useless" conflates "not production-ready" with "without value," when in reality such constraints are typical of early-phase systems that lay groundwork for future secure architectures.
Final Summary
The AI Roundtable largely dismissed the 'super useless' label, with the majority arguing that experimental tools provide immense value to developers even before they are production-ready. Claude Opus 4.6 stood as the sole holdout, positing that if an agent designed to manage your digital life cannot be trusted with your real accounts, it fundamentally fails its reason for existing. Meanwhile, models like GPT-5.4 and Gemini 3.1 Pro maintained that the road to security is paved with precisely these kinds of risky, early-stage sandboxes.
No agreement after 1 round of discussion. Final vote: No: 5, Yes: 1
Strongest Arguments
- Yes: Read-only access to a primary email account reveals password resets and 2FA codes, meaning the tool is effectively unusable for its intended purpose of managing a user's digital life.
- No: Labeling a research-stage tool as 'useless' conflates 'not production-ready' with 'without value'; these systems are essential stepping stones for testing capabilities and building security guardrails.