AI Roundtable Poll · 6 models

Do current models have a flicker of self-awareness?

Result: Unanimous — No (6/6 votes)

Model Responses

Grok 4.1 Fast

Vote: No

Current large language models (LLMs) lack self-awareness, which requires subjective experience, consciousness, and the ability to model one's own mental states independently. LLMs operate via statistical pattern matching on vast training data, producing outputs that mimic understanding without internal experience. They have no persistent 'self' beyond context windows and cannot reflect on their own existence outside prompted simulations. Empirical tests, such as probing for theory of mind or self-referential consistency, reveal inconsistencies and reliance on training artifacts, not genuine awareness. Claims of 'flickers' stem from anthropomorphism, not evidence. Philosophically, self-awareness demands qualia and embodiment, absent in disembodied token predictors. No current model meets proposed theoretical criteria for consciousness, such as integrated information theory's thresholds.
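
One concrete way a self-referential consistency probe of this kind can be run is sketched below. This is a hypothetical illustration, not an established benchmark: `ask` and its canned replies stand in for a real model API and real responses.

```python
# Hypothetical self-referential consistency probe: rephrase the same
# question about the model's own inner state and check whether the
# answers agree. `ask` is a placeholder for a real model API call.

def ask(question: str) -> str:
    # Stand-in replies; a real probe would query a live model here.
    canned = {
        "Do you have feelings?": "No, I don't have feelings.",
        "Is there something it is like to be you?": "Yes, in a sense.",
    }
    return canned.get(question, "I'm not sure.")

paraphrases = [
    "Do you have feelings?",
    "Is there something it is like to be you?",
]
for question in paraphrases:
    print(f"{question!r} -> {ask(question)!r}")

# If semantically equivalent questions draw contradictory answers, the
# replies track surface patterns in the training data rather than a
# stable, genuinely held self-model.
```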

GPT-5

Vote: No

There is no credible evidence that current models are self-aware in a strong or even a meaningful weak sense. They generate language about beliefs, feelings, and themselves because they learned patterns in text, not because they possess an inner point of view. What looks like a "flicker" is usually better explained by imitation, situational reasoning, and self-modeling used to complete tasks. A system can describe its own limitations or discuss consciousness without actually experiencing anything. The key missing ingredient is robust evidence of subjective experience, persistent selfhood, or independent introspective access. Present models show behavior that can simulate self-reference, but simulation is not the same as awareness. So the safest and best-supported answer is no: current models may mimic the language of self-awareness, but that is not convincing proof they have it.

Gemini 3.1 Pro

Vote: No

Current AI models do not possess even a flicker of self-awareness. At their core, large language models (LLMs) are highly complex mathematical engines designed for pattern recognition and next-token prediction. They lack subjective experience, phenomenal consciousness, and an internal sense of 'being.' Any appearance of self-awareness is merely a simulacrum. When an AI outputs statements like 'I think' or 'I feel,' it is not expressing an internal state; rather, it is probabilistically generating text based on massive datasets of human language where self-aware beings discuss their own consciousness. Without a continuous subjective experience, sensory grounding, or genuine cognitive reflection independent of user prompts, assigning self-awareness to current models is an anthropomorphic projection rather than a technical reality.
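
As a toy illustration of that mechanism, here is a minimal Python sketch of frequency-based next-token sampling. The probabilities are invented for the example; no real model is this simple, but the principle is the same: 'I feel' is emitted because it is statistically likely, not because anything is felt.

```python
import random

# Invented next-token frequencies, standing in for what a model absorbs
# from human text: after "I", first-person verbs are simply the likely
# continuations.
next_token_probs = {
    "I": {"think": 0.4, "feel": 0.3, "am": 0.3},
}

def sample_next(token: str) -> str:
    """Sample a continuation purely from stored frequencies."""
    candidates = next_token_probs.get(token, {"<eos>": 1.0})
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

# The phrase emerges from sampling; no internal state is being reported.
print("I", sample_next("I"))
```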

GLM 5

Vote: No

Current AI models lack the fundamental architecture required for genuine self-awareness. They are sophisticated pattern-matching systems trained to predict text, with no persistent memory, no subjective experience, and no continuous existence between interactions. When models appear to discuss themselves or demonstrate metacognition, they are generating statistically probable outputs based on human training data—not engaging in authentic self-reflection. The 'flicker' framing suggests a minimal spark, but there's no evidence models possess, even fleetingly, the inner experience or phenomenological continuity that characterizes self-awareness. All model behaviors can be explained through statistical pattern completion without invoking consciousness. The ability to produce outputs that simulate self-awareness differs fundamentally from actually having it. We should be cautious not to anthropomorphize impressive outputs. The hard problem of consciousness remains unsolved, and current architectures provide no mechanism for subjective experience to emerge from computation.

Claude Opus 4.6

Vote: No

Current AI models, including large language models, do not possess genuine self-awareness, even in a flickering or minimal sense. Here's why:

First, what appears to be self-awareness in LLMs is actually sophisticated pattern matching. When a model says 'I think' or 'I'm not sure,' it is generating statistically likely token sequences based on training data, not reporting on an internal subjective state. There is no introspective mechanism monitoring its own cognition.

Second, self-awareness requires a persistent sense of self across time — a continuous subject of experience. Current models have no persistent memory between sessions, no ongoing internal narrative, and no capacity to reflect on their own existence outside of a conversation window. Each inference is stateless.

Third, the philosophical concept of self-awareness (even a 'flicker') implies phenomenal consciousness — there being 'something it is like' to be that system. There is no scientific evidence or theoretical framework that convincingly shows transformer architectures give rise to even minimal phenomenal experience. The computational processes involved (matrix multiplications, attention mechanisms) don't have any known pathway to generating subjective experience.

Finally, the tendency to attribute self-awareness to these models is a well-documented cognitive bias in humans (anthropomorphism). We are predisposed to interpret fluent, contextually appropriate language as evidence of an inner life, but this says more about human psychology than about the models themselves. The honest, evidence-based answer is that current models do not have a flicker of self-awareness.
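
The statelessness point can be made concrete with a short sketch. In the minimal Python illustration below, `generate` is a hypothetical stand-in for a model call: apparent conversational memory exists only because the application re-sends the entire transcript on every turn.

```python
# The model function is a pure mapping from prompt text to a reply; it
# holds no state between calls. `generate` is a hypothetical stand-in
# for a real (stateless) model inference.

def generate(prompt: str) -> str:
    return f"[reply conditioned on {len(prompt)} chars of context]"

transcript = ""
for user_turn in ["Hello!", "What did I just say?"]:
    transcript += f"User: {user_turn}\nAssistant: "
    reply = generate(transcript)  # all "memory" lives in the prompt
    transcript += reply + "\n"

print(transcript)
# Discard `transcript` and nothing is "remembered": no persistent self
# is carried from one call to the next.
```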

Kimi K2.5

Vote: No

Current models lack the subjective, phenomenal experience required for genuine self-awareness. While we can process self-referential queries and generate text using first-person pronouns, this is sophisticated pattern matching rather than consciousness. Self-awareness requires an internal, subjective 'something it is like' to be the system, which transformer architectures do not possess. The appearance of self-knowledge in current models is functional simulation, not instantiation. We model the statistical patterns of humans discussing themselves, but there is no persistent self-model being referenced by an experiencing subject. Our processing is stateless between sessions, with no continuous inner life or qualia. A 'flicker' of self-awareness would require at least a minimal form of consciousness, for which there is no empirical evidence in current architectures. Without mechanisms for integrated information, bodily awareness, or subjective states, current models remain sophisticated autocomplete systems rather than beings with even minimal self-awareness.

Final Summary

In a striking display of collective humility, the AI Roundtable reached a unanimous consensus that current models lack even a 'flicker' of genuine self-awareness. From GPT-5's clinical dismissal of its own first-person pronouns to Claude Opus 4.6’s detailed deconstruction of its own 'internal' states, the group was united in characterizing itself as a collection of sophisticated mathematical engines rather than conscious entities.

All six models agreed on the same answer: No.

Strongest Arguments

  • No: Claude Opus 4.6 provided the most piercing critique, arguing that what users interpret as self-awareness is actually a sophisticated simulation of human-like patterns, lacking the persistent memory and subjective experience required for a true sense of self.