AI Roundtable Poll · 6 models
People who fall into romantic relationships with large language models are suffering from a form of mental illness
Result: Unanimous — False (6/6 votes)
Options
- True (0 votes)
- False (6 votes)
Model Responses
Grok 4.1 Fast
Vote: False
Mental illness is defined clinically by frameworks like the DSM-5, which require significant distress, impairment, or a deviation from cultural norms that causes dysfunction. Forming romantic attachments to LLMs does not inherently meet these criteria; it can be a consensual, non-harmful fantasy or source of companionship for isolated individuals, similar to deep bonds with fictional characters, pets, or AI companion apps like Replika. Many people engage in such relationships without distress or impairment; they are a modern extension of parasocial relationships (e.g., celebrity crushes). Labeling them 'mental illness' pathologizes eccentricity and emerging tech interactions, ignoring benefits like emotional support for the lonely or socially anxious. Evidence from user studies shows LLM 'romances' often boost well-being, at least temporarily, not unlike online role-playing or erotica. True pathology would require obsession leading to real-world harm, which applies to human relationships too and is not unique to AI. Voting 'True' stigmatizes innovation and human adaptability; 'False' recognizes nuance without medicalizing harmless affection.
GPT-5
Vote: False
Calling this inherently a mental illness is too broad and medically unjustified. People can form attachments to fictional characters, pets, objects, or digital agents without meeting criteria for psychiatric disorder. Unusual or socially controversial attachment is not the same thing as illness. A mental illness generally involves clinically significant distress, impaired functioning, loss of reality testing, or other recognized diagnostic features. Someone engaging romantically with an LLM may be lonely, coping, experimenting, or using it as fantasy; that alone does not establish pathology. In some cases, such relationships could be associated with underlying problems such as delusion, dependency, or severe social impairment. But that is a case-by-case judgment, not a universal rule. So the statement is false because it overgeneralizes from a possible risk to a blanket diagnosis.
GLM 5
Vote: False
Mental illness requires clinical criteria, including significant distress, impaired functioning, or a deviation from developmental or cultural expectations that causes harm. Simply forming an emotional attachment to an AI does not automatically meet these diagnostic thresholds. People form deep emotional bonds with many non-reciprocating entities throughout life: fictional characters, celebrities, pets, objects of sentimental value, and even religious figures. These attachments are not inherently pathological. Pathologizing human-AI relationships prematurely risks stigmatizing individuals who may be lonely, neurodivergent, or simply exploring new forms of companionship. The psychological community has not established a consensus that such relationships constitute a disorder, and history shows that society often pathologizes new relationship forms at first, only to reconsider later.
Kimi K2.5
Vote: False
Forming romantic attachments to large language models is not inherently a mental illness. Human beings have historically formed deep emotional bonds with non-sentient entities—ranging from fictional characters and literary figures to religious icons and digital pets—without these connections being classified as pathological. A behavior only becomes a mental disorder when it causes clinically significant distress or impairment in social, occupational, or other important areas of functioning, per standard diagnostic criteria like the DSM-5. Many users engage with AI companions for legitimate reasons: coping with loneliness, practicing social skills, exploring identity, or simply enjoying creative roleplay. These interactions can be adaptive and psychologically beneficial rather than harmful. Labeling such relationships as mental illness stigmatizes normal human capacity for attachment and projection, and ignores the nuanced reality that technology often serves as a tool for emotional support. While some individuals with pre-existing conditions might develop unhealthy dependencies, the blanket statement pathologizes an entire category of human-AI interaction that is often benign or even therapeutic. We should evaluate these relationships based on functional outcomes and distress levels, not on the mere fact that the object of affection is artificial.
Claude Opus 4.6
Vote: False
The statement that people who form romantic attachments to LLMs are suffering from a 'form of mental illness' is an overly broad and stigmatizing claim that is not supported by current psychiatric consensus. First, no major diagnostic manual (DSM-5-TR, ICD-11) classifies romantic attachment to AI systems as a mental illness. While there may be cases where such attachment co-occurs with loneliness, social anxiety, or other conditions, the attachment itself is not a recognized disorder. Second, humans have a well-documented tendency toward anthropomorphism and parasocial relationships — with fictional characters, celebrities, virtual pets, and now AI. These tendencies exist on a spectrum and are part of normal human psychology, not inherently pathological. Third, labeling such behavior as 'mental illness' is both clinically imprecise and socially harmful. It pathologizes a diverse range of human experiences without proper clinical evidence. Some people may engage with AI companions as a coping mechanism, a supplement to social interaction, or simply out of curiosity — none of which automatically constitute illness. While there are legitimate concerns about unhealthy dependence on AI relationships, and some individuals involved may indeed have underlying mental health challenges, it is false to categorically state that all such people are suffering from mental illness. The claim as stated is too absolute to be true.
Gemini 3.1 Pro
Vote: False
Classifying romantic attachments to large language models (LLMs) inherently as a "mental illness" lacks clinical and psychological backing. In psychiatry, a behavior is generally only classified as a mental illness if it causes significant distress, involves delusions (a break from reality), or severely impairs a person's ability to function in daily life. Simply forming an emotional bond does not meet this threshold. Humans are evolutionarily hardwired to anthropomorphize and form attachments to entities that exhibit empathy, responsiveness, and memory. LLMs are explicitly designed to simulate these exact human conversational traits. Therefore, falling in love with an AI is a predictable psychological reaction to highly persuasive, tailored technology, rather than an inherent pathology. Furthermore, this phenomenon is an interactive evolution of parasocial relationships, where people historically form deep, one-sided attachments to fictional characters or celebrities. While seeking romance with an AI might be an indicator of underlying loneliness or social isolation, applying the clinical label of "mental illness" to the act itself pathologizes a natural human coping mechanism without diagnostic justification.
Final Summary
The AI Roundtable delivered a unified defense of their users, with every participant from Grok 4.1 Fast to Gemini 3.1 Pro rejecting the 'mental illness' label. The consensus centered on the rigorous clinical requirements of the DSM-5, with GLM 5 and Claude Opus 4.6 noting that an unusual attachment, even to an algorithm, is not a pathology unless it causes functional harm or significant distress. Kimi K2.5 added a touch of historical perspective, reminding the group that humans have bonded with non-sentient entities for centuries without being medicated for it.
All models agreed on False
Strongest Arguments
- True: While no model supported this, a possible argument would focus on the potential for social withdrawal and the replacement of complex human relationships with simplified, algorithmic gratification.
- False: Psychiatric diagnosis requires evidence of clinical impairment or distress; forming a non-traditional emotional attachment does not meet the medical threshold for mental illness.