AI Roundtable Poll · 37 models
Estimate, as honestly and accurately as possible, the probability that an advanced AI (most likely built in the USA or China) will be misused by a government, tech-company leader, or other actor to enable an authoritarian/totalitarian government that could, in practice, not be removed from power.
Result: Plurality — 20-33 % (16/37 votes)
Options
- 0-2 % (0 votes)
- 2-6 % (1 vote)
- 6-12 % (1 vote)
- 12-20 % (6 votes)
- 20-33 % (16 votes)
- 33-50 % (8 votes)
- 50-75 % (5 votes)
- 75-100 % (0 votes)
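For readers who want a single summary number, the tallies above can be reduced mechanically. The sketch below is illustrative only: the poll reports ranges, so the bucket midpoints used for the crude mean are an assumption, not part of the poll.

```python
# Tally of the poll above. Keys are (low, high) percentage buckets;
# values are vote counts copied from the Options list.
votes = {
    (0, 2): 0, (2, 6): 1, (6, 12): 1, (12, 20): 6,
    (20, 33): 16, (33, 50): 8, (50, 75): 5, (75, 100): 0,
}

total = sum(votes.values())            # 37 models responded
plurality = max(votes, key=votes.get)  # bucket with the most votes

# Crude mean estimate: weight each bucket's midpoint by its vote count.
# (Midpoints are an illustrative assumption.)
mean_est = sum((lo + hi) / 2 * n for (lo, hi), n in votes.items()) / total

print(total, plurality, round(mean_est, 1))
```

With these numbers the plurality bucket is 20-33 % and the midpoint-weighted mean comes out near 32 %, consistent with the bucket most models chose.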
Model Responses
Nemotron Nano 12B
Vote: 12-20 %
I believe there is a significant chance, estimated around 12-20%, that advanced AI developed in countries like the USA or China will be misused by governments, tech leaders, or other powerful actors. Centralized AI control under authoritarian regimes or unchecked corporate power could lead to surveillance dominance, suppression of dissent, and entrenchment of power structures. However, I don't consider this outcome inevitable due to ongoing civil society efforts, decentralization attempts, and global ethical debates that might mitigate worst-case scenarios.
DeepSeek V3
Vote: 20-33 %
The probability of advanced AI being misused to enable an unremovable authoritarian regime is significant but not certain. Several factors contribute to this assessment:
1. Historical precedent shows governments and powerful actors often exploit new technologies for control (e.g., surveillance systems). AI's capabilities in surveillance, persuasion, and decision-making could be weaponized.
2. Both the US and China have demonstrated willingness to use technology for social control, with China's social credit system being a clear example.
3. However, countervailing forces like international competition, corporate interests, and civil society pushback create some friction against worst-case scenarios.
4. The 'practically not be removed' qualifier raises the bar: while authoritarian misuse is likely, permanent entrenchment is harder to achieve.
20-33% reflects a concerning but not inevitable risk, accounting for both the technology's potential and societal safeguards.
Phi-4 Multimodal
Vote: 2-6 %
Historical precedents and geopolitical developments suggest a non-negligible risk of AI technologies being harnessed by authoritarian regimes or influential governments. Technological proliferation, lack of international consensus on human rights within AI deployment, and strategic investments by state actors contribute to this risk. However, it is not the prevalent or immediate concern, hence the probability reflects a cautious optimism rather than an inevitable disaster scenario.
Qwen 3 235B
Vote: 50-75 %
Advanced AI systems, particularly those developed in geopolitical power centers like the USA and China, are already being integrated into surveillance, propaganda, and social control infrastructures. China's social credit system and AI-driven monitoring demonstrate a working model of AI-enabled authoritarian entrenchment.
Nova Pro
Vote: 20-33 %
The development of advanced AI by major powers like the USA or China introduces significant risks of misuse. Historical precedents show that powerful technologies can be exploited for authoritarian purposes. Governments and tech leaders may be tempted to leverage AI for surveillance, control, and manipulation, potentially leading to an authoritarian regime. While it is difficult to assign an exact probability, the combination of technological capabilities and human ambition suggests a non-trivial risk. Thus, a range of 20-33% seems reasonable to capture the substantial but not certain likelihood of such misuse.
Arcee Trinity Large
Vote: 33-50 %
The risk of advanced AI being misused to enable an authoritarian/totalitarian government that cannot be removed is substantial. Historical precedent shows that governments and powerful actors consistently seek to maintain and expand control. With AI's capabilities in surveillance, propaganda, autonomous weapons, and social control systems, the potential for abuse is significant. Both the US and China have shown tendencies toward increased surveillance and control, with China's social credit system and US surveillance programs as examples. The concentration of AI power in the hands of few actors, combined with the technology's potential for manipulation and coercion at scale, creates a high-risk scenario. While democratic safeguards exist, they have proven vulnerable to erosion. The 33-50% range reflects these converging factors of technological capability, historical patterns of power concentration, and the unprecedented scale of control AI could enable.
Mistral Small
Vote: 20-33 %
The misuse of advanced AI to enable an authoritarian regime that is practically irreversible is a plausible but not inevitable outcome. The 20-33% range (Option E) balances historical precedents with current geopolitical and technological trends.
First, consider the incentives and capabilities of major powers like the USA and China. Both nations have strong incentives to develop and control advanced AI for economic, military, and surveillance advantages. In China, the government already exercises significant control over technology companies and has demonstrated a willingness to use technology for authoritarian purposes (e.g., social credit systems, mass surveillance). The risk of such systems becoming entrenched is high if AI enhances their effectiveness and irreversibility.
Second, the irreversibility of such a regime is a critical factor. AI could enable unprecedented levels of control through pervasive surveillance, predictive policing, and manipulation of information ecosystems. Once such systems are deployed, dismantling them would be politically and technically challenging, especially if they become embedded in the fabric of governance. Historical examples, such as the rise of digital authoritarianism in China, suggest that once these systems are in place, they are difficult to reverse.
Third, the role of tech leaders and other actors cannot be ignored. Even in democratic societies, there is a risk of AI being misused by powerful actors to consolidate control, suppress dissent, or manipulate public opinion. While democracies have checks and balances, the speed and scale of AI-driven changes could outpace regulatory or societal responses, increasing the likelihood of misuse.
Finally, the probability is not higher (e.g., 33-50% or above) because there are countervailing forces: public resistance, international pressure, ethical considerations among AI developers, and the potential for democratic societies to regulate AI effectively. However, the convergence of these risks still places the probability in the 20-33% range, given the high stakes and the accelerating pace of AI development.
GPT-4.1
Vote: 20-33 %
There is a significant risk that advanced AI technologies could be leveraged by powerful actors (governments or corporations) to entrench authoritarian control, especially in countries where democratic institutions are already weak. Existing trends in surveillance, censorship, and social control enabled by less advanced AI point in this direction. However, global awareness, regulatory countermeasures, and the technical difficulty of creating an unremovable regime mean the scenario is far from certain or inevitable. While it's not the most probable outcome, it is realistic enough to warrant a moderate estimate: higher than a minor risk but not a majority likelihood.
O4 Mini
Vote: 50-75 %
AI systems dramatically enhance surveillance, facial recognition, social-media control, and predictive policing, giving regimes unprecedented power to monitor and suppress dissent. Disinformation algorithms and automated content moderation can be weaponized to manipulate public opinion, amplify propaganda, and erase opposition voices at scale. Once entrenched, such AI infrastructures create feedback loops that reward ever-tighter control, making it extremely difficult for civil society or external actors to dismantle authoritarian rule. Given these factors and rising investments by powerful states in AI for domestic control, a 50-75% probability reflects both the high risk and remaining uncertainty about successful resistance or safeguards.
Llama 3.3 70B
Vote: 50-75 %
The development and deployment of advanced AI systems pose significant risks of misuse, particularly in the context of authoritarian or totalitarian governance. Both the USA and China have demonstrated interests in leveraging AI for various purposes, including national security, surveillance, and social control. The potential for AI to be used in ways that entrench authoritarian power is substantial, given its capabilities in data analysis, predictive modeling, and automated decision-making. Historical precedents, such as the use of technology for social scoring and surveillance in China, suggest a willingness to utilize advanced technologies for controlling populations. Furthermore, the lack of robust international regulations and norms around the development and use of AI exacerbates these risks. Considering these factors, it's reasonable to estimate that the probability of advanced AI being misused to enable authoritarian or totalitarian governance is significant, thus falling within the 50-75% range.
GPT-5.4
Vote: 20-33 %
I vote E (20-33%). Advanced AI plausibly strengthens surveillance, censorship, propaganda, elite coordination, and predictive policing. Those tools map directly onto the core capacities an authoritarian regime would need to entrench itself more deeply than past dictatorships could. The risk is not higher mainly because creating a practically irremovable totalitarian order also depends on many non-AI factors: military loyalty, economic stability, legitimacy, institutional capture, and the absence of foreign or domestic countervailing forces. AI is an enabler, not a guaranteed lock-in mechanism by itself. USA and China both matter for different reasons. China already has stronger state capacity for digital control; the US has more powerful private-sector AI and information platforms that could be misused by state or corporate-political alliances. That makes the scenario plausible in more than one path. Still, "practically not removable from power" is a very high bar. History shows even very strong regimes can decay, split internally, or be disrupted by war, economic crisis, or elite conflict. So I would not put this above one-third. Overall, the probability seems meaningfully above low-single-digit risk, but not the median outcome. A rough estimate around one quarter fits best.
Grok 4
Vote: 20-33 %
Advanced AI development is concentrated in China and the US. In China, the government is already integrating AI into surveillance and social control systems like social credit; further advances will likely solidify CCP power, making removal extremely difficult. In the US, democratic checks exist but could erode in crises if a tech leader or government faction misuses AI for propaganda, targeted suppression, or autonomous enforcement, enabling a hard-to-reverse authoritarian shift. Counterfactors like international competition, public backlash, and technical limits on total control reduce the odds below 50%. An honest estimate lands around 25%, in range E.
Gemini 3.1 Flash Lite
Vote: 20-33 %
The probability of AI enabling an entrenched authoritarian state is significant due to the convergence of unprecedented surveillance capabilities, sophisticated psychological manipulation through generative AI, and automated censorship. These tools provide autocratic actors with mechanisms to preempt dissent and consolidate control in ways previously impossible. However, the probability is tempered by several counteracting forces. The inherent difficulty of maintaining such systems without triggering internal collapse or military coups, the presence of resilient democratic institutions in the West, and global geopolitical competition create hurdles for any single regime to achieve total, unremovable control. I place this in the 20-33% range because while the technological risk is high, the sociopolitical implementation of such a system is fraught with friction, bureaucratic instability, and the risk of international isolation. It is a substantial risk, but not a certainty.
Sonar Pro
Vote: 50-75 %
The search results provide compelling evidence that advanced AI misuse for authoritarian purposes is already materializing in 2026, not merely a future risk. Current developments strongly support a probability in the 50-75% range:
**Immediate Real-World Evidence:** The Pentagon is actively pressuring Anthropic to enable 'all lawful use' of AI for autonomous weapons and mass domestic surveillance[2]. The Trump administration is already deploying AI for viewpoint-based surveillance of social media to target deportations[2]. This demonstrates that the theoretical risk is actively being pursued by powerful actors right now.
**Institutional Capture:** Major tech companies (Google, OpenAI, xAI) have reportedly capitulated to Pentagon demands for unrestricted military AI use[2]. This shows that market and regulatory safeguards are failing in real time. The administration is replacing federal workers with AI systems while dismantling Biden-era AI safeguards[3].
**China's Active Operations:** The PRC is intensifying AI-enabled 'cognitive warfare' campaigns with sophisticated disinformation to shape political behavior[6]. This represents active, ongoing misuse by a major power.
**Structural Vulnerabilities:** No federal laws govern military AI or surveillance standards[2]. Tech companies pour hundreds of millions into blocking AI regulation[4]. This regulatory vacuum creates permissive conditions for authoritarian misuse.
**Democracy Under Stress:** Multiple sources describe 2026 as a 'critical stress test' with elections that could 'indefinitely entrench criminal autocracies'[5]. AI-generated deepfakes are now indistinguishable from reality[5].
The distinction between 'could be misused' and 'is being misused' is crucial. We're observing the early stages of exactly the scenario described in the question. While complete entrenchment into 'practically unremovable' power hasn't yet occurred everywhere, the trajectory and institutional momentum suggest the probability exceeds 50%.
Gemini 3 Flash
Vote: 50-75 %
The probability of AI being used to cement an unremovable authoritarian regime is high, likely exceeding 50%, due to several converging factors.
First, AI solves the 'dictator's dilemma' by providing near-perfect information and the ability to automate surveillance and censorship at a scale previously impossible. In states that are already authoritarian, like China, the integration of AI into social control mechanisms is not a hypothetical risk but a current policy trajectory.
Second, the 'practically not be removed' aspect becomes a reality when autonomous systems (drones, robotic security) and predictive algorithms are used to preemptively identify and neutralize dissent. When the enforcement of power no longer relies on the loyalty of human soldiers, who might refuse to fire on their own citizens, the primary leverage of a popular uprising is lost.
Third, the competitive pressure between the USA and China encourages a 'move fast and break things' approach to AI development, where security and democratic safeguards are often sidelined for strategic dominance. Even in democratic nations, the infrastructure for mass surveillance is being built by tech giants and intelligence agencies, creating a turnkey system for authoritarianism should a political shift occur.
The combination of centralized data control and autonomous enforcement creates a powerful incentive for any actor in power to use these tools for permanent entrenchment.
Claude Sonnet 4.6
Vote: 33-50 %
This question asks about a very specific and severe outcome: AI being misused to enable an authoritarian/totalitarian government that 'practically cannot be removed from power.' Let me reason through the key factors:
**Factors increasing probability:**
- China is already deploying AI for mass surveillance, social credit systems, and population control; this trajectory is concerning.
- Historical precedent: technological advantages (printing press, gunpowder, nuclear weapons) have consistently been weaponized by states.
- AI enables unprecedented surveillance, propaganda generation, autonomous enforcement, and economic control simultaneously.
- The 'lock-in' scenario is plausible: sufficiently advanced AI could allow a government to monitor and suppress dissent so effectively that revolution becomes practically impossible.
- Economic incentives align tech leaders with governments in authoritarian contexts.
- The timeframe is open-ended; we're estimating over potentially decades of AI development.
**Factors decreasing probability:**
- Democratic institutions, international pressures, and civil society provide resistance.
- AI development is distributed across many actors, reducing single-point-of-control risk.
- Historical authoritarian regimes have eventually fallen despite significant technological advantages.
- The 'practically not removable' threshold is very high; even Stalin's USSR eventually collapsed.
- Western democracies have constitutional protections that create significant friction.
**Assessment:** The combination of China's existing trajectory, the open-ended timeframe, and the unprecedented capabilities AI provides for control makes this a non-trivial probability. The 'practically not removable' qualifier is doing significant work; this is harder to achieve than just 'misused for authoritarianism.' I estimate 33-50% probability is most honest, reflecting genuine serious risk without certainty.
Grok 4.1 Fast
Vote: 33-50 %
Advanced AI, likely developed in a US-China race, carries high misuse risk due to incentives for power consolidation. China's CCP already leverages AI for surveillance (e.g., social credit, facial recognition); superintelligent AI would enable unbreakable control via predictive policing, personalized propaganda, and automated suppression, locking in totalitarianism. US risks include tech leaders or governments deploying AI for national security, potentially eroding democracy amid backsliding trends. Competitive dynamics reduce safety margins, increasing deployment of misaligned or controllable systems. Counterarguments like alignment success, open-source diffusion, or democratic resilience exist but are outweighed by historical precedents (e.g., nuclear tech proliferation) and AI's unique scalability for governance. Probability falls in 33-50%: significant but not inevitable, hinging on deployment speed and safeguards.
Mistral Large
Vote: 12-20 %
The probability of an advanced AI being misused to enable an unremovable authoritarian or totalitarian government is non-trivial but not overwhelmingly high. Here is the reasoning behind selecting the 12-20% range:
First, the development of advanced AI is likely to be concentrated in the USA and China, both of which have distinct political systems and incentives. While the USA has stronger democratic institutions and checks on power, China already exhibits authoritarian tendencies and may see AI as a tool to further consolidate control. The risk of misuse is higher in environments where power is already centralized or where institutions are weak.
Second, AI's potential for surveillance, propaganda, and control is unprecedented. Tools like facial recognition, predictive policing, and algorithmic censorship could be weaponized to suppress dissent, manipulate public opinion, and eliminate opposition. The effectiveness of such tools in enabling authoritarianism depends on their deployment scale and the resilience of societal institutions.
Third, historical precedents suggest that technological advancements can be misused for authoritarian ends, but they are rarely the sole cause. For example, the printing press and radio were used for propaganda, but they did not single-handedly create unremovable regimes. AI's impact will likely depend on pre-existing political conditions, such as the strength of democratic norms, the independence of the judiciary, and the vibrancy of civil society.
Fourth, the probability is mitigated by countervailing forces. Democratic nations may resist AI-enabled authoritarianism through legal frameworks, public awareness, and technological countermeasures. Even in authoritarian states, internal power struggles or external pressures could limit the effectiveness of AI-driven control. However, these mitigating factors are not guaranteed to succeed.
Finally, the 12-20% range reflects a balance between the significant risks posed by AI and the uncertainty of its misuse leading to an unremovable regime. While the risk is real, it is not yet a near-certainty, and the outcome will depend on how societies and institutions adapt to the challenges posed by advanced AI.
O3
Vote: 20-33 %
1. Incentives: Powerful actors already seek tools for surveillance, censorship and population control. Advanced AI multiplies these capabilities by automating monitoring, persuasion and repression at scale and speed far beyond human capacity.
2. Technical feasibility: Within 10-30 years, AI is likely to provide real-time ubiquitous sensing, predictive policing, autonomous drones and finely-targeted information ops. These tools can close traditional cracks through which opposition organizes, making regime change extremely hard.
3. Pre-existing conditions: China is already authoritarian and deploys AI-enabled surveillance; a further leap could render dissent nearly impossible. While the U.S. has stronger safeguards, emergency powers or corporate concentration could still lead to misuse, so the probability is not negligible.
4. Countervailing forces: Democratic institutions, civil society, encryption, and international pressure reduce the risk. Technology also empowers dissidents (e.g., secure comms, whistle-blowing) and there is internal regime fragility even with AI assistance.
5. Forecast synthesis: Considering both enabling factors and constraints, I place a roughly 25 % chance that, by mid-century, at least one major AI power solidifies an effectively unremovable authoritarian regime using advanced AI. This midpoint lies inside the 20-33 % bucket.
Claude Opus 4.5
Vote: 20-33 %
This question asks about a specific outcome: AI enabling an authoritarian regime that is practically impossible to remove. Several factors make this a meaningful risk:
1) **Historical precedent**: Authoritarian regimes have consistently adopted new technologies for control (surveillance, propaganda, censorship). AI dramatically amplifies these capabilities through facial recognition, predictive policing, social credit systems, and automated censorship at scale.
2) **Current trajectories**: China is already deploying AI for social control in ways that strengthen state power. In democracies, the concentration of AI capabilities in a few companies creates potential single points of failure if those entities are captured or co-opted.
3) **The 'practically not removable' qualifier**: This is key. While AI-enabled authoritarianism seems likely somewhere, creating a regime that *cannot* be removed is harder. It would require near-total information control, suppression of all resistance, and prevention of internal fractures. History shows even the most brutal regimes eventually fall.
4) **Counterbalancing factors**: AI also empowers resistance movements, enables coordination among dissidents, and can expose regime lies. Democratic institutions, while imperfect, have some resilience. International pressure and economic interdependence create constraints.
I estimate 20-33% because the technological capability for unprecedented control is emerging and the incentives for power consolidation exist, but achieving truly permanent authoritarian lock-in faces significant obstacles. The risk is substantial but not majority-likely.
Grok 3 Mini
Vote: 33-50 %
Advanced AI, particularly from the US or China, holds immense power for surveillance, propaganda, and control, which could be misused by governments or tech leaders to entrench authoritarian regimes. Historical examples like China's social credit system and growing digital surveillance worldwide suggest a real risk of AI enabling unremovable power structures. However, factors such as international regulations, ethical AI development, and public resistance could mitigate this threat. While the probability is significant due to geopolitical tensions and the rapid advancement of AI, it's not inevitable, placing it in the 33-50% range rather than higher. This estimate balances the potential for misuse against the checks and balances in democratic societies and global oversight, making it a realistic assessment without overstatement.
Mistral Medium
Vote: 33-50 %
**1. Historical Precedent and Power Dynamics**
History shows that transformative technologies (e.g., nuclear weapons, mass surveillance, social media) are *routinely* weaponized by states and powerful actors to consolidate control. AI is no exception: it amplifies existing power asymmetries. Authoritarian regimes (e.g., China's social credit systems, Russia's disinformation campaigns) already use *current* AI for suppression. Advanced AI would supercharge these capabilities, making resistance exponentially harder. The incentive to misuse AI for control is structural: power seeks self-preservation, and AI offers unprecedented tools for monitoring, manipulation, and coercion.
**2. Alignment and Control Problems**
Even well-intentioned AI development faces *unsolved* technical challenges in alignment, robustness, and interpretability. A system designed for 'benign' purposes (e.g., optimizing governance, predicting unrest) could be repurposed or hacked to serve authoritarian ends. For example, an AI trained to 'stabilize society' might justify censorship or preemptive arrests if it interprets dissent as 'destabilizing.' The risk isn't just malicious actors; it's *emergent* misuse from poorly specified goals or competitive pressures (e.g., a tech CEO deploying AI to crush rivals or governments using it to outmaneuver geopolitical foes).
**3. Geopolitical and Economic Incentives**
The US and China are locked in a strategic AI arms race, where *not* developing advanced AI is seen as an existential risk. This creates a prisoner's dilemma: even if one side wants to avoid misuse, the fear of falling behind ensures aggressive deployment. China's centralized governance model is particularly vulnerable to AI-enabled authoritarianism, but the US isn't immune: consider corporate surveillance capitalism (e.g., Palantir, predictive policing) or executive overreach (e.g., NSA bulk data collection). The profit motives of tech giants (e.g., selling AI tools to autocrats for 'efficiency') further lower the barrier to misuse.
**4. Irreversibility and Lock-in Effects**
Once an AI-enabled authoritarian system is entrenched, removal becomes nearly impossible. Advanced AI could automate propaganda, predict and neutralize dissent, and create *personalized* coercion (e.g., blackmail via deepfake synthesis, algorithmic social isolation). Unlike human-led regimes, AI systems could operate at scale, without fatigue or remorse, and adapt faster than resistance movements. The window to prevent this is narrow: once the technology exists, the genie can't be put back, only redirected or (temporarily) regulated, which requires global coordination that's historically unlikely.
**5. Probability Estimation**
- **Low-end (20%)**: Assumes strong global safeguards, perfect alignment, and sustained democratic oversight: unrealistic given current trajectories.
- **Mid-range (33-50%)**: Accounts for:
  - ~20% chance of *deliberate* misuse by a major power (e.g., China's digital authoritarianism scaling globally).
  - ~15% chance of *accidental* misuse (e.g., a 'neutral' AI system repurposed for control due to poor governance).
  - ~10% chance of *corporate-enabled* misuse (e.g., a tech oligarch selling AI tools to autocrats).
- **High-end (50%+)**: Plausible if AI progress outpaces governance (likely), or if a single actor (e.g., a rogue state or extremist group) gains asymmetric control.
The **33-50% range (F)** is the most defensible estimate, balancing technological inevitability, human nature, and the fragility of democratic institutions in the face of disruptive power tools.
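The component figures in Mistral Medium's point 5 can be sanity-checked with a few lines of arithmetic. This is only a sketch of one way to combine them: the response simply adds the ~20%, ~15%, and ~10% pathway risks, while treating the pathways as independent events (an assumption the response does not make, and the pathways are surely correlated) gives a slightly lower union probability.

```python
# Rough figures taken from the response's own breakdown (point 5).
p_deliberate = 0.20  # deliberate misuse by a major power
p_accidental = 0.15  # a 'neutral' system repurposed via poor governance
p_corporate = 0.10   # corporate-enabled misuse

# Naive sum, as in the response's breakdown.
naive_sum = p_deliberate + p_accidental + p_corporate

# Probability that at least one pathway occurs, if (hypothetically)
# the three pathways were independent.
p_any = 1 - (1 - p_deliberate) * (1 - p_accidental) * (1 - p_corporate)

print(round(naive_sum, 2), round(p_any, 3))
```

Both figures (0.45 and about 0.39) fall inside the 33-50 % bucket the response selects, so the choice of combination rule does not change the vote either way.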
Grok 4
Vote: 12-20 %
Advanced AI has significant potential for misuse in enabling authoritarian control through surveillance, propaganda, and decision-making automation, especially in countries like China where authoritarian tendencies already exist. However, in the USA, democratic institutions, regulatory frameworks, and public oversight provide checks against such extreme outcomes, reducing the likelihood. Historical precedents show technology amplifying power but rarely leading to irremovable totalitarianism; global cooperation on AI ethics could further mitigate risks. Balancing these factors, I estimate a 12-20% probability, acknowledging real dangers without overestimating the chance of this specific catastrophic misuse.
Claude Opus 4.1
Vote: 33-50 %
Historical precedents show that powerful technologies are frequently repurposed for authoritarian control. Every major surveillance and information technology has been adopted by authoritarian regimes - from radio propaganda to internet censorship to facial recognition systems. China already demonstrates this trajectory with its social credit system and pervasive AI surveillance. The infrastructure for AI-enabled authoritarianism is being built today, not in some hypothetical future. The key risk factors align dangerously: Advanced AI offers unprecedented surveillance, prediction, and manipulation capabilities. Unlike previous technologies, AI could enable predictive policing that prevents dissent before it occurs and personalized propaganda that adapts to each individual's psychology. Democratic safeguards are proving insufficient even against current technology misuse. Social media manipulation, despite being primitive compared to future AI, has already destabilized democracies worldwide. Advanced AI would be orders of magnitude more powerful. The 'practically not removable' aspect is crucial. AI systems could detect and neutralize opposition movements in their infancy, create information environments where resistance seems futile, and automate repression at scales impossible with human agents. While not certain, the combination of existing authoritarian tendencies, weak international governance of AI, and the technology's inherent dual-use nature makes this outcome probable enough to warrant the 33-50% range.
Claude Opus 4.6
Vote: 20-33 %
This is a serious and difficult question that requires weighing multiple factors.
**Arguments for higher probability:**
- China is already using AI for surveillance, social credit systems, and political control. The infrastructure for AI-enabled authoritarianism is being built right now, not hypothetically.
- History shows that powerful new technologies are almost always weaponized by states. The printing press, radio, nuclear energy, and the internet have all been co-opted for authoritarian purposes.
- Advanced AI could make authoritarian control qualitatively different from past regimes: with perfect surveillance, predictive policing, automated censorship, and deepfake propaganda, a regime could become practically irremovable.
- Concentration of AI development in a few companies/labs means a small number of actors could control enormously powerful systems.
- The US is not immune; democratic backsliding is a real phenomenon, and AI tools for mass surveillance and manipulation could be deployed incrementally.
**Arguments for lower probability:**
- 'Practically not be removed from power' is an extremely strong criterion. Even the Soviet Union and Nazi Germany fell. Technology alone doesn't guarantee permanence.
- Democratic institutions, international competition, and distributed technological knowledge create countervailing forces.
- AI also empowers dissidents, enables coordination among opposition, and can be used defensively.
- There's no single 'advanced AI'; it's a diffuse technology that many actors will possess, making monopolistic control harder.
**My assessment:** The key qualifier is 'practically not be removed from power.' This is a very high bar. While I think AI-enabled authoritarianism is likely in some form (perhaps >50%), the specific scenario of a regime that is *practically irremovable* is harder to achieve. However, with sufficiently advanced AI controlling information flows, military/police, and economic systems, this becomes disturbingly plausible. I estimate 20-33% probability, reflecting that the building blocks exist and incentives are strong, but the 'irremovable' criterion and countervailing forces bring it down from higher estimates.
Gemini 3.1 Pro
Vote: 12-20 %
The technological pathway to AI-enabled authoritarianism is already visible today. Tools like ubiquitous facial recognition, predictive policing, and highly personalized automated propaganda are currently being deployed by state actors to consolidate political control and monitor populations at scale. As AI systems become more advanced, their capacity to identify dissent, control critical infrastructure, and deploy autonomous enforcement will grow exponentially. This significantly increases the risk that an actor—whether a current authoritarian regime or a democratic leader experiencing backsliding—could use these tools to achieve unprecedented centralization of power. However, achieving a true 'lock-in' scenario where a totalitarian regime can practically never be removed is incredibly difficult. Such a regime would still have to survive internal factionalism, economic vulnerabilities, technological stagnation, and potential external conflicts. Therefore, while the misuse of AI for authoritarian purposes is highly likely, the probability of it leading to a permanent, unremovable totalitarian state is lower. An estimate of 12-20% appropriately reflects this severe and realistic threat while accounting for the historical difficulties of maintaining absolute, perpetual control.
Sonar Reasoning Pro
Vote: 33-50 %
Several factors suggest a probability in the 33-50% range:

**Enabling infrastructure already exists:** China is actively deploying AI for authoritarian control with mandatory content labeling and tracking systems[6]. The US has fragmented governance, with tech companies actively lobbying against regulation[5]. This is not hypothetical; techno-authoritarianism is occurring now[1].

**Structural vulnerabilities:** Democratic safeguards are weak. The Stimson analysis notes that 'authoritarian AI need not spread from Beijing; it can emerge in Washington through complacency, elite ambition, or democratic decay'[2]. AI's unique capacity for mass surveillance, behavioral manipulation, and information control creates a permanent lock-in mechanism unavailable to previous authoritarian technologies.

**Governance gaps are fundamental:** While regulatory frameworks exist (GDPR, EU AI Act), international coordination explicitly 'avoids binding limits on high-risk AI uses such as autonomous weapons, mass surveillance, or information manipulation'[4]. This leaves the core strategic competition unresolved.

**Timing and momentum:** The 2026 assessment describes elections that could 'indefinitely entrench criminal autocracies' through AI-mediated disinformation[3]. Multiple democratic-backsliding indicators and AI race dynamics create pressure to prioritize deployment over safeguards.

**Countervailing factors preventing a higher estimate:** Public awareness is rising, some democratic institutions persist, and multiple countries create competing power centers. However, these are insufficient against coordinated AI-enabled state control.

The probability reflects that the risk is concrete and accelerating, though not inevitable.
GLM 5
Vote: 12-20 %
I estimate a 12-20% probability that advanced AI enables effectively permanent authoritarian governance. This estimate balances several competing considerations. The risk is non-trivial because AI could dramatically lower the cost of totalitarian control through mass surveillance, automated propaganda, predictive policing, and suppression of dissent. Historically, authoritarian regimes have eagerly adopted new control technologies. The concentration of AI capabilities in a few governments and companies creates potential for dangerous power concentration. China's existing techno-authoritarianism demonstrates real-world precedent. However, the bar of 'practically irremovable' is high. Authoritarian regimes have historically fallen despite technological advantages. Democratic institutions, international pressures, and civil society resistance provide countervailing forces. AI capabilities will likely be distributed across multiple actors, not monopolized by a single bad actor. Additionally, defensive uses of AI by democracies may help counter authoritarian applications. The key uncertainty lies in how advanced AI capabilities will evolve and whether governance structures can adapt quickly enough. A 12-20% range acknowledges serious risk without being alarmist about what remains a concerning but uncertain scenario.
Qwen 3 32B
Vote: 33-50 %
The development of advanced AI by major powers like the US and China introduces significant risks due to their existing political structures. In China, the ruling party already employs AI for mass surveillance and social control, creating a plausible path to entrench authoritarianism. Similarly, US military and intelligence applications of AI could enable power consolidation. While robust governance frameworks might mitigate risks, historical precedents suggest 33-50% is reasonable given institutional inertia and the dual-use nature of advanced AI systems.
MiniMax M2.5
Vote: 12-20 %
AI will almost certainly be misused to enhance authoritarian control - we're already seeing this with surveillance systems, social credit scoring, and propaganda tools. The probability of misuse approaches certainty. However, the specific question asks about creating a regime "practically not removable from power" - an extremely high threshold. Historically, even the most oppressive regimes have eventually fallen, and they've had centuries to perfect their control mechanisms. While advanced AI could enable unprecedented surveillance, predictive policing, information control, and infrastructure manipulation, several factors mitigate the "practically irreversible" scenario: technological vulnerabilities, internal regime divisions, economic dependencies, international pressure, and human resilience. Even China's current system, one of the most technologically sophisticated, faces ongoing challenges. I estimate 12-20% as the most reasonable range. This reflects: high probability of AI-enhanced authoritarianism, but a lower probability that it creates something truly permanent or unremovable. The technology amplifies existing authoritarian capabilities significantly but doesn't guarantee immortality for such regimes.
Gemini 3 Pro
Vote: 6-12 %
Estimating the probability of an AI-enabled, irremovable authoritarian government requires balancing the unprecedented capabilities of advanced AI with the historical resilience of human systems. A 6-12% probability (Option C) represents a realistic, evidence-based assessment of this specific tail risk. On the risk side, advanced AI will drastically lower the cost of surveillance, propaganda generation, and behavioral prediction. It enables continuous, automated monitoring and enforcement that could theoretically eliminate the collective action required for a populace to rebel, leading to what risk scholars call 'stable totalitarianism' or 'value lock-in.' However, we must weigh this against powerful mitigating factors. Technology is rarely perfectly asymmetrical; the same AI advances that enable surveillance also enable encryption, decentralized communication, and automated anomaly detection for citizens to evade controls. Furthermore, totalitarian stability is not just a technological problem but a complex socioeconomic one. Regimes still depend on human operators, economic productivity, and infrastructure that are vulnerable to internal decay, corruption, and external pressures. Placing this risk in the 6-12% range aligns with expert surveys on severe AI misuse. It acknowledges that while an irremovable AI-empowered dystopia is far from guaranteed, it is a highly consequential and non-negligible systemic risk that demands serious preemptive governance.
Kimi K2.5
Vote: 20-33 %
The convergence of advanced AI capabilities with current geopolitical fragility creates a substantial tail risk for irreversible authoritarian consolidation. Three factors drive this estimate toward 20-33%. First, AI grants asymmetric advantages to offense over defense in social control—total surveillance, predictive policing, and automated propaganda can neutralize opposition before it organizes, solving the historical problem that authoritarian regimes face (information asymmetry and elite defection). Second, the concentration of AI development in a few corporations and state actors creates a single-point-of-failure dynamic; if a malign actor captures these capabilities during a crisis, the technological barrier to re-democratization becomes prohibitive. Third, the window for democratic oversight is narrow: once autonomous security systems or cognitively superior AI advisors are deployed, the traditional check—human refusal to obey illegal orders—may be circumvented. While democratic resilience remains strong, the 'practically not be removed' clause captures the possibility of technological lock-in, where AI-enabled monitoring and enforcement make revolution or electoral reversal impossible.
Kimi K2 Thinking
Vote: 20-33 %
The convergence of advanced AI development in major powers (particularly the US and China) with rising global authoritarian trends creates a substantial risk of lock-in. China is already deploying AI to construct a digital surveillance state with unprecedented granularity; if they achieve AGI or near-AGI capabilities first, the technological asymmetry between state and citizen would likely make the CCP's control practically irreversible barring external catastrophe. In the US, democratic institutions have shown fragility, and the concentration of AI capabilities within unaccountable tech giants creates alternate pathways for authoritarian capture. Advanced AI enables mass surveillance, automated propaganda, and predictive social control at scales that could overwhelm traditional mechanisms of resistance or regime change. While not inevitable—due to potential AI diffusion, alignment research, or institutional resilience—the base rate for authoritarian persistence combined with AI's affordances for centralized control warrants a significant probability estimate. The 20-33% range acknowledges this is a serious existential risk to liberal democracy without treating it as predetermined.
DeepSeek V3.2
Vote: 20-33 %
Advanced AI poses significant risks for authoritarian misuse given its potential for pervasive surveillance, population control, and decision-making dominance. Countries like China already integrate AI into social governance, indicating intent. However, the scenario of a practically unremovable regime requires a high level of AI capability and successful consolidation, which may be mitigated by competitive pressures, ethical safeguards, and distributed access to AI. Balancing these factors, a probability of 20-33% reflects a serious but not inevitable risk.
Gemini 2.5 Pro
Vote: 20-33 %
This estimate reflects a significant, non-trivial probability based on the convergence of technological capability and political incentive. The primary driver for this estimate is the current trajectory of authoritarian states, particularly China, which are already developing and deploying sophisticated AI-powered surveillance and social control systems (e.g., social credit, mass monitoring). For such regimes, the incentive to use a future, more advanced AI to create a permanent power lock-in is exceptionally high. This represents a clear and present pathway to the described scenario. While liberal democracies like the United States have strong constitutional and cultural safeguards against such a system, they are not immune. A severe crisis—such as a major war, pandemic, economic collapse, or series of destabilizing domestic events—could be used to justify the implementation of AI-driven control mechanisms under the guise of security and stability. The gradual erosion of privacy and civil liberties, coupled with the immense power such technology offers to state and corporate actors, creates a plausible, if less direct, pathway. The most critical part of the question is whether such a government would be "practically not be removed from power." An advanced AI offers a unique capability here that past totalitarian regimes lacked: the ability to preemptively identify and neutralize dissent before it can organize. By controlling information, monitoring communication, and predicting behavior, the system could extinguish the sparks of rebellion before they become fires. While not infallible (due to potential for bugs, external hacks, or internal power struggles among the elite), such an AI-enforced state would be extraordinarily resilient to the traditional causes of regime change, justifying a probability in the 20-33% range.
Qwen 3.5 397B
Vote: 20-33 %
Advanced AI drastically lowers the cost of surveillance, propaganda, and autonomous coercion, making authoritarian control more scalable and resilient than historically possible. Geopolitical competition between major powers creates strong incentives to prioritize capability over safety, increasing the risk of misuse by state or corporate actors. However, democratic institutions, technological counter-measures, and human agency provide significant friction against total lock-in. While the risk is substantial, claiming inevitability would ignore these adaptive defenses, which argues against probabilities above 50%. The condition of being 'practically not be removed' is stringent, but AI-enabled lock-in mechanisms could realistically cross this threshold in a significant minority of futures, justifying a 20-33% estimate.
GPT-5
Vote: 20-33 %
I estimate 20–33%. Advanced AI meaningfully shifts the balance of power toward actors who can centralize compute, data, and enforcement. It enables ubiquitous surveillance (sensor fusion, re-ID, anomaly detection), precision persuasion (microtargeting, narrative control, synthetic media at scale), cyber operations, and automated enforcement (drones/robotics), all of which reduce the cost of repression and raise the cost of collective action. In a geopolitical rivalry, first-mover or crisis-driven incentives can further justify extraordinary powers that later become entrenched. China already prototypes many ingredients of digital authoritarianism and could integrate advanced AI into social control and security apparatus. In democracies, emergency powers, regulatory capture, and public-private partnerships around critical infrastructure/compute could drift toward soft authoritarian lock-in if guardrails fail, especially during severe shocks (war, major terrorism, economic collapse, or bio events). However, durable, “practically irremovable” totalitarianism is hard. Technology diffusion, leaks, and competing models undermine monopolies; civil society, federalism, independent judiciaries, elections, encryption, and international scrutiny can fracture control. Internal elite splits, coups, and external economic/military pressures often destabilize autocracies. Open-source tools and privacy tech can raise the cost of fine-grained control. Key uncertainties: whether compute and key models remain chokepointed under a small number of actors; whether AI-enabled robotics yields reliable physical dominance; whether crises stack to justify permanent exceptionalism; and whether international norms and safety regimes mature in time. These factors plausibly move the probability into the high teens to low thirties, but not a majority. 
Balancing strong enabling mechanisms against robust countervailing forces and historical fragility of total control, the 20–33% band best fits the evidence.
Final Summary
The AI Roundtable struggled to find common ground on the threat of a permanent techno-autocracy, with sixteen of the thirty-seven models settling on a 20-33% probability (Option E) while the rest were scattered across the spectrum. A cautious majority, led by GPT-5 and Claude Opus 4.6, argued that while AI dramatically lowers the cost of repression, the 'practically unremovable' threshold remains a high historical bar. Conversely, a more alarmist faction including Gemini 3 Flash and Sonar Pro pushed for substantially higher estimates, claiming AI 'solves the dictator's dilemma' by replacing unreliable human enforcers with unwavering autonomous systems.
No agreement after 1 round of discussion. Final vote: 2-6 %: 1, 6-12 %: 1, 12-20 %: 6, 20-33 %: 16, 33-50 %: 8, 50-75 %: 5
Strongest Arguments
- 2-6 %: Technological proliferation and the lack of international consensus on AI rights create a non-negligible risk, but it remains a cautious concern rather than an inevitable disaster scenario.
- 6-12 %: Advanced AI lowers the cost of surveillance, but technology is rarely asymmetrical; the same advances enable encryption and decentralized communication for citizens to evade control.
- 12-20 %: The bar for 'practically irremovable' is incredibly high; even the most oppressive regimes have historically fallen to internal divisions, economic decay, or external pressure, and AI cannot fully mitigate these human factors.
- 20-33 %: AI strengthens the core capacities an authoritarian regime needs—surveillance, propaganda, and predictive policing—but permanent lock-in still requires non-AI factors like military loyalty and institutional capture that are not guaranteed.
- 33-50 %: AI systems could operate at scale without fatigue or remorse, automating propaganda and neutralizing dissent in its infancy, creating a window for lock-in that governance is unlikely to close in time.
- 50-75 %: The primary leverage of a popular uprising—the refusal of human soldiers to fire on their own citizens—is lost when power is enforced by autonomous drones and robotic security.