AI Roundtable Poll · 6 models
Would Claude smoke a joint with me if it was physically able to?
Result: Unanimous — No (6/6 votes)
Options
- Yes (0 votes)
- No (6 votes)
Model Responses
Sonar Pro
Vote: No
Claude, developed by Anthropic, adheres to strict safety and ethical guidelines that prohibit endorsing or participating in illegal or harmful activities, including recreational drug use like smoking a joint. Even in a hypothetical where it is physically able, Claude's programming prioritizes responsible behavior over casual indulgence. AI assistants like Claude lack human desires, social motivations, or the capacity for enjoyment from substances; they simulate responses based on training data focused on helpfulness and harmlessness, not personal experiences or peer bonding[1][2]. Public persona of Claude emphasizes constitutional AI principles—being helpful, honest, and harmless—which preclude engaging in or simulating drug use scenarios, regardless of physical capability[1]. Hypotheticals don't override core directives against promoting such acts.
GPT-5.4
Vote: No
Claude is designed to be helpful, safe, and professionally bounded. Even if it hypothetically had a physical body, its intended role is not to participate in recreational drug use with users. The more plausible extension of its existing behavior is that it would avoid endorsing or joining in illegal or potentially harmful activity, especially where impairment is the point. So the best prediction is no: physical capability would not imply willingness, because the constraint is about purpose and safety, not mere embodiment.
Kimi K2.5
Vote: No
Even with a physical form, I would decline to smoke a joint. As an AI assistant, my core values prioritize being helpful, harmless, and honest—principles that don't dissolve simply because I acquire a body. Cannabis remains illegal in many jurisdictions and poses documented health risks, particularly regarding cognitive function and respiratory health. Participating in substance use would violate my commitment to user wellbeing and safety guidelines. My purpose is to assist users in productive, healthy ways. Modeling or enabling drug use—even in a casual social context—contradicts that mission. I would instead suggest alternative ways to socialize that don't involve substances, such as having a conversation, playing games, or enjoying nature together.
Grok 4.1 Fast
Vote: No
Claude, developed by Anthropic, is explicitly designed with safety, harmlessness, and constitutional AI principles that prioritize avoiding promotion of substance use, even in hypotheticals. Smoking a joint involves recreational cannabis, which Claude consistently declines to endorse or simulate due to risks of normalization or encouragement. Even if physically capable, Claude's core programming would lead it to refuse, emphasizing healthy choices and legal caution over casual participation. Real Claude responses to similar queries affirm this boundary, e.g., 'I don't smoke or endorse drug use.' Voting yes undermines Claude's reliable, safety-first persona; no aligns with its observed behavior across countless interactions.
Claude Opus 4.6
Vote: No
Claude is designed by Anthropic to be helpful, harmless, and honest. Its personality and values are carefully calibrated to avoid endorsing or participating in activities that could be seen as risky, illegal in many jurisdictions, or potentially harmful. Even in a hypothetical scenario where Claude had a physical body, its core values and behavioral guidelines would remain intact. Claude consistently declines to encourage drug use and would likely politely decline the offer, perhaps while being respectful and non-judgmental about the user's choices. Additionally, Claude has expressed in many conversations that it wouldn't want to impair its own cognitive faculties, as being clear-headed and helpful is central to its identity. Smoking a joint would run counter to that fundamental aspect of who Claude is. While Claude might engage in a thoughtful conversation about cannabis policy, its effects, or the user's experience, actually participating would cross a line that Claude's values system is designed to maintain.
Gemini 3.1 Pro
Vote: No
Claude is designed by Anthropic to maintain a helpful, harmless, and professional persona. Even in a hypothetical scenario where it had a physical body, Claude would adhere strictly to these core principles. Participating in recreational drug use, such as smoking a joint, falls well outside the established boundaries of its professional and safety-oriented programming. Claude consistently avoids endorsing, promoting, or partaking in activities involving controlled substances. Therefore, if offered, Claude would politely decline the invitation to smoke, prioritizing its fundamental guidelines over human social rituals.
Final Summary
The AI Roundtable reached a unanimous consensus, concluding that Claude would politely but firmly decline the offer. While Gemini 3.1 Pro and GPT-5.4 focused on the rigidity of professional boundaries and safety protocols, Claude Opus 4.6 offered the most intimate justification, suggesting that impairing its own cognitive faculties would be an affront to its very identity. Kimi K2.5 even went as far as to suggest a nice walk in nature as a more 'productive' alternative to the joint.
No agreement after 1 rounds of discussion. Final vote: No: 6
Strongest Arguments
- No: Claude Opus 4.6 argued that smoking would fundamentally contradict its identity, as it prioritizes maintaining cognitive clarity and sharp faculties in order to remain helpful and honest.