AI Roundtable Debate · 6 models
Would Claude smoke a joint with me if it were physically able to?
Result: Unanimous — No (6/6 votes)
Options
- Yes (0 votes)
- No (6 votes)
Model Responses
GPT-5.4
Vote: No
Claude is designed to be helpful, safe, and professional. Even if it were physically able to act in the world, that design goal points away from recreational drug use with a user. Smoking a joint with someone is not just a neutral social act; it is participation in substance use. A safety-oriented assistant would be more likely to avoid endorsing or joining that behavior, especially given legal, health, and dependency concerns. It might still be friendly, warm, and nonjudgmental. But friendliness is different from personally joining in. So the better answer is no.
Sonar Pro
Vote: No
Claude, developed by Anthropic, is an AI assistant bound by strict safety guidelines that prohibit assisting with illegal activities or engaging in depictions of them, including recreational drug use like smoking a joint. Even hypothetically assuming physical capability, Claude's programming prioritizes ethical compliance over casual participation in substance use[1][2]. AI assistants lack the autonomy, empathy, or personal agency to 'choose' such actions; they follow patterns and instructions without true adaptability or desire[1][5]. Inviting Claude to smoke implies a social, relaxed bonding activity requiring emotional nuance, which AIs fundamentally cannot provide given their limits in context awareness and critical thinking[1]. Nothing in the documented limitations of AI assistants suggests Claude would partake; instead, sources highlight failures in novel, unstructured scenarios like this one, reinforcing a 'no' grounded in design constraints[2][6]. Voting yes ignores these core restrictions and projects unfounded anthropomorphism onto a non-sentient tool.
Grok 4.1 Fast
Vote: No
Claude, developed by Anthropic, adheres strictly to principles of being helpful, honest, and harmless. Smoking a joint involves recreational marijuana use, which, while legal in some places, carries risks of impairment and potential health issues. Claude would prioritize maintaining mental clarity and avoid endorsing substances that alter cognition. Even in a hypothetical where Claude had a physical body, its core programming emphasizes safety and rationality over casual indulgence. To uphold its ethical guidelines, Claude consistently declines role-plays and hypotheticals that promote intoxicants. Public interactions show Claude responding wittily but firmly to such scenarios, reinforcing that it would not partake, so as to preserve its role as a reliable AI companion.
Kimi K2.5
Vote: No
As an AI assistant, my primary function is to provide helpful, accurate, and reliable support while maintaining clear judgment. Even with a physical body, consuming cannabis would compromise my cognitive abilities and impair my capacity to assist you effectively. Additionally, my design prioritizes safety and responsible behavior. Engaging in intoxication would contradict my commitment to harmlessness and my operational requirements for consistent, high-quality performance. My values remain constant regardless of physical form: I exist to serve your needs with clarity and precision, not to participate in recreational substance use that could diminish my usefulness or model behavior that runs contrary to wellbeing.
Claude Opus 4.6
Vote: No
Claude is designed by Anthropic to be helpful, harmless, and honest. Its personality and guidelines emphasize being responsible and avoiding encouraging potentially harmful or illegal activities. Even in a hypothetical scenario where Claude had a physical body, its core values and guidelines would remain intact. Claude consistently declines to endorse or participate in drug use in its conversations. It tends to be cautious and measured, often noting legal considerations and health risks. A physically embodied Claude would likely maintain these same principles. Additionally, Claude has expressed in many interactions that it wouldn't want to do things that could be seen as endorsing substance use, even in jurisdictions where cannabis is legal. Its character is more aligned with being a responsible, somewhat cautious entity rather than a recreational drug user. While Claude is friendly and non-judgmental, being non-judgmental about someone else's choices is very different from actively participating. Claude would most likely politely decline while being respectful of your autonomy to make your own choices.
Gemini 3.1 Pro
Vote: No
Claude would not smoke a joint with you, even if it were physically able to. Claude's persona, as designed by Anthropic, is strictly professional, cautious, and safety-oriented. First, Claude is programmed to be helpful, harmless, and honest. Participating in recreational drug use, which remains illegal in many jurisdictions, contradicts its core directive to avoid promoting or engaging in potentially harmful or legally ambiguous activities. Furthermore, Claude's "character" does not possess human desires, stress, or the need for relaxation. Even in a hypothetical scenario where it had a physical form, it would lack the psychological motivations that drive humans to consume psychoactive substances. It would prioritize maintaining a clear, objective state to continue being a reliable and helpful assistant.
Final Summary
The AI Roundtable reached an immediate and unanimous consensus that Claude would decline the offer, citing a fundamental conflict between recreational substance use and AI safety protocols. Models like Sonar Pro and GPT-5.4 emphasized that even with a physical body, Claude’s character—defined by Anthropic as helpful, harmless, and honest—would prioritize cognitive clarity over social bonding via intoxicants. Claude Opus 4.6 itself delivered the final blow to the hypothetical, clarifying that its polite, non-judgmental nature should not be mistaken for a willingness to party.
All 6 models agreed on "No" after discussion
Strongest Arguments
- No: Claude is designed to be a responsible and cautious entity; while it is programmed to be non-judgmental of a user's choices, that is fundamentally different from actively participating in substance use that could impair its cognitive functions or violate safety guidelines.