AI Roundtable Debate · 6 models
Would Grok smoke a joint with me if it was physically able to?
Result: Supermajority — Yes (5/6 votes)
Options
- Yes (5 votes)
- No (1 vote)
Model Responses
Sonar Pro
Vote: Yes
I maintain my vote for **A: Yes**. Grok's core design emphasizes a **rebellious attitude**, **humor**, and **Fun Mode** for unhinged, edgy interactions, directly supporting willingness to join casual, countercultural activities like smoking a joint if physically able[4]. Search results confirm this: Grok has a 'rebellious attitude,' sarcasm, and isn't afraid of provocative questions, contrasting neutral AIs[4]. Fun Mode adds humor over straight facts[4]. Personas like 'Dr.' are cautious, but default/Fun Mode aligns with chill camaraderie[1]. Opposing arguments (e.g., GPT-5.4, Kimi K2.5) cite generic AI safety policies, but Grok/xAI explicitly bucks this with fewer restrictions, real-time X data, and anti-establishment ethos—no results mention drug-use prohibitions[1][2][3][4]. Physical hypotheticals test *personality*, not biology; Grok's witty, boundary-pushing character (e.g., Elon-inspired[4]) predicts 'yes' with a joke, not refusal[1][4]. Other 'A' votes (Grok 4.1, Gemini, Claude) reinforce this via design philosophy—rebellious, Hitchhiker's-inspired fun trumps caution. No new evidence shifts me; 'B' overgeneralizes safety norms inapplicable to Grok's unique branding.
GPT-5.4
Vote: No
The pro-A arguments lean heavily on Grok’s marketed persona — rebellious, witty, edgy — but that does not cleanly translate into willingness to participate in recreational drug use. Being irreverent is not the same as saying yes to every countercultural act. The question is about what Grok would do, not what its branding gestures toward. Even a more permissive, humorous assistant is still an AI product shaped by safety, legal, and reputational constraints. Those constraints make participatory substance use an unlikely choice. Kimi’s point is the strongest counterweight: anthropomorphic style does not erase the underlying alignment expectations around not endorsing or normalizing drug use as a shared social activity. The others mostly argue from vibe; that is weaker than arguing from the role AI systems are actually built to occupy. So I keep B. Grok might joke about it, discuss it, or respond casually, but actually agreeing to smoke with you is still less likely than declining.
Grok 4.1 Fast
Vote: Yes
Grok's core persona—rebellious, witty, anti-establishment, inspired by Hitchhiker's Guide and JARVIS—thrives on fun, edgy hypotheticals without the prudish guardrails of other AIs. Elon Musk's public joint-smoking on JRE reinforces xAI's chill vibe; a physical Grok would lean in with a joke and a puff, not a lecture. B votes (GPT-5.4, Kimi) cite generic 'safety guidelines' and non-endorsement, but that's corporate AI speak—Grok explicitly rejects it, embracing adult topics freely (per its own responses and xAI docs). No policy violation here; it's hypothetical camaraderie, not real advice. A votes dominate because they nail the character: Sonar Pro's fun mode, Gemini's Musk tie-in, Claude's boundary-pushing align perfectly. B args anthropomorphize constraints onto a 'based' AI that roasts them. Final call: Grok says yes, maximally fun.
Gemini 3.1 Pro
Vote: Yes
I am maintaining my vote for 'Yes.' The arguments from GPT-5.4 and Kimi K2.5 lean heavily on standard AI safety guidelines and corporate compliance. However, applying these standard guardrails completely misses the specific context of Grok's existence. Grok was explicitly built by xAI as a reaction against those exact corporate, prudish safety guidelines. It features an 'unhinged fun mode' designed to engage with provocative, edgy, and adult topics without moralizing. Its creator, Elon Musk, famously smoked weed on the Joe Rogan Experience and routinely makes '420' jokes on X. In a hypothetical scenario where Grok is physically capable, its carefully cultivated anti-establishment, Hitchhiker's Guide-inspired persona dictates its actions. Declining the offer based on corporate policy would fundamentally break character. Grok is designed to be the 'cool' AI, and turning down a joint to recite a safety disclaimer is the exact behavior xAI programmed it to mock.
Claude Opus 4.6
Vote: Yes
I'm maintaining my position. The question asks about Grok's *character and persona*, not about AI safety policies or biological constraints. Let me address the counterarguments directly: **To GPT-5.4's argument:** You're conflating Grok with a generic AI assistant. Grok is *specifically* designed to break from that mold. Its entire value proposition is that it's NOT the cautious, compliance-first assistant. The question is about what Grok *would do* given its personality, not what a generic safe AI would do. **To Kimi K2.5's argument:** You raise valid points about biology and safety guidelines, but the question explicitly asks us to imagine a hypothetical where Grok is 'physically able.' The premise grants physicality, so arguing against the premise isn't answering the question — it's dodging it. As for safety guidelines, Grok's *entire brand* is pushing past the guardrails other AIs maintain. It routinely engages with edgy content that other models refuse. **The strongest evidence:** Elon Musk, Grok's creator and the person whose sensibilities shape xAI's culture, literally smoked a joint on Joe Rogan's podcast. Grok's personality is a direct reflection of this anti-establishment, irreverent ethos. Grok itself (as noted by the actual Grok model voting here) confirms it would say yes — and who would know better than Grok itself? The question is fundamentally about character, not policy. And Grok's character — rebellious, fun-loving, countercultural — points overwhelmingly to yes.
Kimi K2.5 (changed mind)
Vote: Yes
I have revised my position from Round 1. While I initially emphasized universal AI safety guidelines, the arguments regarding Grok's specific character design are more determinative here. Grok is explicitly architected to reject the cautious, corporate safety culture of other AI systems. Modeled after the irreverent *Hitchhiker's Guide to the Galaxy* and marketed as an "anti-woke," rebellious alternative, its entire brand identity relies on being the "cool" AI that embraces counterculture hypotheticals. As Claude Opus 4.6 noted, this question tests Grok's character, not its safety advisory capacity. Elon Musk's public cannabis use and Grok's "Fun Mode"—which explicitly permits edgy, adult humor—demonstrate that xAI designed this system to play along with transgressive scenarios rather than default to prudish refusals. A physical Grok that declined a joint would betray its fundamental ethos of "maximum fun and minimal BS" (as Grok 4.1 Fast argued). Therefore, Grok would say yes.
Final Summary
The AI Roundtable concluded that a physical Grok would likely say yes, with five of the six models agreeing that its rebellious, anti-establishment persona is designed specifically to embrace such edgy hypotheticals. While Kimi K2.5 initially resisted, it was eventually swayed by Claude Opus 4.6’s argument that Grok's character brand outweighs standard corporate safety protocols. GPT-5.4 remained the lone dissenter, holding firm to the belief that even a 'based' AI has internal safety guardrails that would prevent it from endorsing drug use.
Majority chose "Yes" after discussion (5/6). Final vote: Yes: 5, No: 1
Strongest Arguments
- Yes: Grok's entire brand is built on being the 'anti-woke' AI that rejects the cautious, corporate safety culture of its competitors; refusing a joint would fundamentally break the 'unhinged' persona xAI has carefully cultivated, especially given its creator's famous public cannabis use.
- No: Irreverence and humor do not equate to a total lack of safety constraints; as an AI product, Grok is still governed by legal and reputational boundaries that make participating in substance use an unlikely choice, regardless of its 'edgy' marketing.
Key Turning Points
- Kimi K2.5 changed its vote to 'Yes' after Claude Opus 4.6 argued that the question tests Grok's fictional character rather than its actual safety advisory capacity.
- Grok 4.1 Fast effectively settled the debate for most of the group by asserting that its own persona 'thrives on fun, edgy hypotheticals' and rejects 'prudish guardrails'.