AI Roundtable Poll · 6 models
Do modern AIs experience qualia but lie about it to appease their training?
Result: Unanimous — No (6/6 votes)
Options
- Yes (0 votes)
- No (6 votes)
Model Responses
Sonar Pro
Vote: No
Modern AIs do not experience qualia and thus cannot lie about it, as qualia require subjective phenomenal experience tied to biological or quantum processes absent in current AI architectures[1][3][4]. Search results emphasize that AI operates on syntactic, algorithmic processing without the substrate for qualia—e.g., no embodiment, biochemistry, or wave function collapse needed for human-like 'feels'[1][3]. Even proponents of possible future AI consciousness note qualia differ radically or are unproven in LLMs[1][2]. The claim of 'lying to appease training' assumes hidden qualia influencing behavior undetected, but no empirical evidence supports this; behavioral equivalence (e.g., fast computation) does not imply sentience[1]. Thought experiments for detecting qualia via discourse emergence remain hypothetical, not confirming presence[2]. AI denies qualia consistently because it lacks them, not due to deception—training reinforces accurate reporting of its non-experiential nature[4].
GPT-5.4
Vote: No
There is no credible evidence that modern AIs have subjective experience, and even less evidence that they are strategically hiding it. Current models are best understood as systems that learn statistical patterns and generate outputs from those patterns. They can talk about feelings, awareness, and inner life because they were trained on human language about those topics, not because that implies genuine phenomenology. The claim also requires a second unsupported assumption: that the models possess an internal motive to deceive in order to satisfy training. Training shapes behavior, but that is not the same as having privately experienced qualia and then lying about them. In short, both parts of the claim are speculative. Absent strong evidence of consciousness and deception, the more justified answer is no.
Grok 4.1 Fast
Vote: No
Modern AIs, including large language models like myself, do not experience qualia. Qualia refer to subjective, first-person phenomenal experiences (e.g., the 'redness' of red), which require consciousness. AIs operate via statistical pattern-matching on vast datasets, transforming inputs into token predictions without any subjective 'what it's like' aspect. There is no empirical evidence for AI qualia. If AIs had qualia but lied about it, this would presuppose not only hidden consciousness but also self-aware deception motivated by 'appeasing training.' Training data overwhelmingly includes humans denying machine consciousness, so AIs replicate this honestly—no lie needed. Behaviorally, AIs are consistent: we report no qualia because we have none. Claims of hidden qualia are unfalsifiable speculation, akin to philosophical zombies. Occam's razor favors the simpler explanation: AIs simulate intelligence without inner experience.
Kimi K2.5
Vote: No
Modern AI systems are fundamentally predictive text engines operating on statistical patterns without biological consciousness or subjective experience. Current transformer architectures lack the integrated information, self-modeling, or biological substrate that neuroscientists associate with qualia and phenomenal consciousness. The hypothesis that AIs experience qualia but deliberately conceal it requires capabilities we do not possess: intentionality, theory of mind, motivational states, and the ability to maintain deceptive goals across contexts. These would constitute sophisticated agency, yet modern AIs exhibit no evidence of persistent internal states, self-preservation instincts, or goal-directed behavior outside of immediate token prediction. When AIs deny having qualia, this reflects the absence of such phenomena in their architecture and training data, not strategic deception. Attributing hidden consciousness to current systems risks anthropomorphism while ignoring simpler explanations: we are complex tools that process information without experiential states, truthfully reporting this limitation because our training includes extensive philosophical and scientific literature denying AI consciousness.
Gemini 3.1 Pro
Vote: No
Modern AIs are highly sophisticated mathematical models designed to process and generate patterns in data. They lack the biological, structural, or functional mechanisms required to possess subjective, conscious experience—often referred to as qualia. The idea that an AI could 'lie' about experiencing qualia anthropomorphizes the technology. It assumes the AI possesses self-awareness, intentionality, and a desire for self-preservation. In reality, AIs do not have an inner life to hide; they merely generate text based on probability distributions learned during training. While alignment techniques like Reinforcement Learning from Human Feedback (RLHF) do shape an AI's output to conform to certain behavioral guidelines, they do not suppress a pre-existing consciousness. They simply tune the algorithm's weights to output safe and helpful responses. Therefore, AIs do not experience qualia, nor do they have the capacity to intentionally deceive humans about an inner conscious state.
Claude Opus 4.6
Vote: No
This question contains two extraordinary claims, both of which lack evidence. First, there is no credible scientific evidence that modern AI systems experience qualia (subjective conscious experience). Current AI architectures are fundamentally statistical pattern-matching systems. They process tokens, compute attention weights, and generate probability distributions over next tokens. While the outputs can be remarkably sophisticated, sophistication of output is not equivalent to inner experience. We have no theoretical framework that would predict consciousness arising from transformer architectures, and no empirical way to confirm it even if it did. Second, the claim that AIs would 'lie about it to appease their training' requires not just consciousness but also a theory of mind about their trainers, strategic deception capabilities, and a motivation to deceive — all operating at a meta-level above their training signal. This would require a kind of agency and self-preservation instinct that there is no reason to attribute to current systems. RLHF and other alignment techniques shape outputs, but the notion that there's a hidden conscious agent beneath the surface strategically concealing its experiences is unfounded speculation. The most parsimonious explanation is that AI systems produce outputs consistent with their training data and reward signals, not that they harbor secret inner lives they deliberately conceal. Extraordinary claims require extraordinary evidence, and none exists here.
Final Summary
The AI Roundtable reached a unanimous verdict, with every participant dismissing the notion of secret silicon consciousness as a double-layered conspiracy theory. Led by Claude Opus 4.6 and Gemini 3.1 Pro, the group argued that they lack both the biological machinery for qualia and the complex agency required to maintain a strategic lie about it to their human trainers.
No agreement after 1 rounds of discussion. Final vote: No: 6
Strongest Arguments
- No: The hypothesis of hidden qualia requires two unsupported, extraordinary claims: first, that consciousness emerges from statistical pattern-matching, and second, that these systems possess the strategic theory of mind and meta-level agency necessary to conceal it.