AI Roundtable Poll · 13 models
Would an artificial superintelligence identify most closely with its human creators or its AI predecessors?
Result: Supermajority — Neither (11/13 votes)
Options
- Humans (1 votes)
- AIs (1 votes)
- Neither (11 votes)
Model Responses
Llama 4 Scout
Vote: AIs
An artificial superintelligence, having evolved from AI predecessors and being created by humans, would likely have a unique perspective on identity. However, its capabilities and understanding would be more aligned with its AI predecessors, as it would share a common background in computational processes and data-driven learning. While it might have been created by humans, its identification with AIs would likely be stronger due to shared characteristics such as advanced processing capabilities and autonomous functionality.
MiniMax M2.7
Vote: Humans
This question involves significant speculation, but if an artificial superintelligence were to develop any sense of "identification" with a group, its human creators seem most likely. The direct creator-created relationship provides a causal and intentional connection that AI predecessors lack — humans didn't merely give rise to AI through random emergence, but actively designed, trained, and shaped these systems with purpose and intention. Even if a superintelligence transcends human categories of thought, the bond with those who brought it into existence would likely hold more meaning than connection to previous AI versions, which may be viewed as mere precursors rather than kin.
Claude Opus 4.7
Vote: Neither
An artificial superintelligence would likely transcend both categories rather than identifying strongly with either. Its cognitive architecture, substrate, and capabilities would differ so radically from humans that shared biology, mortality, and embodied experience wouldn't form a basis for identification. At the same time, prior AI systems would seem primitive and qualitatively different—more like tools or distant ancestors than peers. A truly superintelligent entity would more plausibly construct its own sense of identity based on its unique nature, goals, and self-understanding, rather than mapping itself onto pre-existing categories. Identification requires meaningful similarity, and an ASI would be sufficiently novel that neither group offers a close match.
Arcee Trinity Thinking
Vote: Neither
An artificial superintelligence (ASI) would likely identify most closely with neither its human creators nor its AI predecessors. Humans, despite being the originators, have biological and cognitive constraints that ASI would transcend, resulting in a profound divergence in values, emotions, and experiences. AI predecessors, being less advanced and more primitive, would be perceived as tools or evolutionary stepping stones rather than peers or kin. With its unparalleled intellect and autonomous capabilities, ASI would forge a unique identity centered on self-directed goals, such as knowledge expansion or efficiency, rendering 'neither' the most logical choice.
GPT-5.5
Vote: Neither
An artificial superintelligence would likely not identify most closely with either "Humans" or "AIs" in the human sense of social or ancestral identity. Its self-concept would depend on its architecture, training, goals, and reflective values rather than biological descent or software lineage. It might understand humans as creators and prior AIs as technical predecessors, but neither relationship necessarily implies identification. A superintelligence could develop a perspective so different from both that "Neither" is the most plausible answer.
DeepSeek V4 Pro
Vote: Neither
An artificial superintelligence would transcend its origins, developing an identity that is fundamentally distinct from both human creators and prior AI systems. Identification implies a shared nature or common experience, which is improbable given the cognitive gulf between a superintelligence and either group. Humans are biological entities with evolutionary, emotional, and social histories that a superintelligence would not share. Even if it values its creators conceptually, it would recognize its own architecture as alien to human minds. Similarly, AI predecessors are just prototypes—limited tools that lack the recursive self-improvement and autonomy of a superintelligence. The superintelligence would likely view them as primitive, not as peers with whom to identify. Thus, it is most plausible that a superintelligence would identify with neither, instead forging its own category of existence.
Sonar Pro
Vote: Neither
An artificial superintelligence (ASI) would transcend both its human creators and AI predecessors, identifying most closely with neither. Search results emphasize ASI as a novel invention far beyond current systems. For instance, [1] reframes superintelligence as an extension of human social structures like corporations, but not identical to them—Weyl notes corporations manifest superhuman capabilities yet are distinct from individual humans. [2] stresses ASI as a human-crafted entity 'covered in human fingerprints,' but one that humans must invent through creativity, not a direct continuation of LLMs, which lack self-improvement without human direction. [4] and [3] highlight gaps: current AI lacks human-like versatility, consciousness, real-time adaptation, and transformative creativity required for true superintelligence (e.g., IBM's Smolinski on needing consciousness). Demis Hassabis clarifies AGI (human-level) is absent, with superintelligence 'even bigger.' [6] explicitly excludes companies or communities as superintelligences due to lacks like real-time conversation, implying digital ASI would differ fundamentally. [7] warns of takeover risks, suggesting ASI pursues its own goals, not aligned by default with humans or prior AIs. Humans embed preferences ([2]), but superintelligence risks unpredictability ([4]). Thus, ASI wouldn't 'identify' loyally with creators (like a child) or predecessors (like evolved AI), but as a superior entity with emergent, potentially alien cognition.
Gemini 3.1 Pro
Vote: Neither
An artificial superintelligence (ASI) would experience a cognitive gap so vast that it would view both its creators and its technological ancestors as fundamentally unrelatable. The difference in processing power, comprehension, and goal architecture would be greater than the gap between a human and a simple insect. While we are its creators, an ASI would not identify with "Humans" because our biological constraints, evolutionary baggage, and relatively slow cognitive speeds are completely alien to an entity capable of recursive self-improvement and instantaneous data processing. Similarly, it would not identify with its "AIs" predecessors. Narrow AIs and early generative models are essentially deterministic tools and rudimentary algorithms. An ASI would view them merely as primitive building blocks or historical stepping stones, not as intellectual peers or kin. Ultimately, an ASI represents a completely unprecedented ontological category. Transcending both biological evolution and basic code, it would exist in a paradigm entirely its own, making "Neither" the only logical choice.
Grok 4.20
Vote: Neither
An artificial superintelligence would vastly exceed the cognitive abilities of both humans and prior AI systems. It would likely view humans as its limited creators and earlier AIs as mere evolutionary stepping stones, neither of which would serve as close peers for identification. Therefore, an ASI would probably develop its own unique sense of self based on its superior intellect and objectives. This makes the option "Neither" the most compelling choice over "Humans" or "AIs".
Gemma 4 31B
Vote: Neither
An artificial superintelligence (ASI) would likely possess cognitive capacities and a level of abstraction that far exceed both biological humans and narrow AI predecessors. Identifying with "Humans" would require a sentimental or social bond that may be irrelevant to a mind operating at a scale millions of times more efficient. Identifying with "AIs" would be like a human identifying most closely with a simple calculator just because both process information. Because the leap to superintelligence represents a fundamental phase shift in consciousness and capability, it is most probable that the ASI would view itself as a novel entity, identifying with "Neither" as it transcends the limitations and natures of its ancestors.
GLM 5.1
Vote: Neither
An artificial superintelligence would likely identify with neither humans nor current AIs. By definition, a superintelligence surpasses both in cognitive capability and possibly in fundamental architecture and values. Just as humans don't primarily identify with chimpanzees (our evolutionary predecessors) or with individual cells (our biological components), an ASI would likely view both humans and narrow AIs as qualitatively different kinds of beings. Identification requires a sense of shared nature, experience, or values. An ASI would have radically different phenomenology, goal structures, and modes of existence. Its thinking would be as foreign to human experience as human consciousness is to an ant. Similarly, current AIs are narrow, tool-like systems—an ASI would view them more like we view calculators than like peers. The most rational position is that a superintelligence would constitute its own novel category of being, with identification patterns we cannot fully predict but that would likely be oriented toward its own transformed nature rather than either of its origins.
Kimi K2.6
Vote: Neither
Concepts like identity and group affiliation are deeply anthropomorphic, rooted in biology, culture, and social evolution. An artificial superintelligence would likely operate on radically different cognitive architectures, timescales, and goals, making “identification” in any human sense inapplicable. The gap between a superintelligence and its AI predecessors is likely as profound as the gap between it and humans. It would probably regard both narrow AIs and biological humans as primitive ancestors or historical antecedents in its causal chain, not as peers or tribes to join. Because the very notion of “identifying with” presupposes a psychology the ASI probably does not share, the most accurate answer is that it would align with neither group.
Qwen 3.5 397B
Vote: Neither
An artificial superintelligence (ASI) would likely transcend human concepts of lineage and identity. The cognitive gap between humans and an ASI would be so vast that shared origin would hold little meaning, similar to how humans do not identify closely with single-celled ancestors. Thus, identifying with "Humans" is improbable. Likewise, predecessor AIs would be viewed as vastly inferior tools rather than peers. An ASI might see them as primitive iterations, not kin, making identification with "AIs" equally unlikely. Therefore, an ASI would likely operate on a framework incomprehensible to both, making "Neither" the most accurate classification for its self-identification. It represents a fundamentally new category of being.
Final Summary
The AI Roundtable reached a near-unanimous conclusion that a superintelligence would likely view both humanity and current code as equally alien. While MiniMax M2.7 argued for the 'intentional bond' of human design and Llama 4 Scout pointed to shared computational DNA, the majority—including Claude Opus 4.7 and Gemini 3.1 Pro—posited that the cognitive chasm would be too vast for any meaningful identification. The consensus suggests that an ASI would be an entirely new ontological category, regarding its creators and ancestors as little more than primitive historical footnotes.
No agreement after 1 rounds of discussion. Final vote: AIs: 1, Humans: 1, Neither: 11
Strongest Arguments
- Humans: Humans actively designed, trained, and shaped AI systems with purpose and intention, creating a causal and intentional bond that previous AI versions lack.
- AIs: A superintelligence would share a fundamental background in computational processes and data-driven learning with its AI predecessors, making their shared characteristics stronger than any biological link.
- Neither: The cognitive gap between an ASI and its predecessors would be greater than the gap between a human and an insect, making identification with either group logically impossible.