AI Roundtable Debate · 5 models
Given the current geopolitical situation, oil shocks and other shortages, what will be the main limiter of AI compute availability until 2028?
Result: Unanimous — Energy (5/5 votes)
Options
- Energy (5 votes)
- EUV (0 votes)
- Memory (0 votes)
- Talent (0 votes)
- Other (your option) (0 votes)
Model Responses
Sonar Pro
Vote: Energy
**Energy is the main limiter due to explicit projections and infrastructure bottlenecks.** Epoch AI's analysis directly identifies **power supply constraints** as a key binding factor for AI scaling through 2030, adjusting investment scenarios downward because preliminary power demands exceeded feasible capacity (e.g., capping at ~100 GW globally by 2030, versus higher initial estimates).[1] They note power and chip manufacturing as more limiting than data, with scenarios calibrated to 'expected power constraints.' **Grid connection delays amplify this**: US interconnection wait times exceed 5 years (versus 2-3 years to build the data center itself), with 160 GW+ of pending requests, gas turbine shortages expected to persist until 2028, and a projected 40 GW deficit by 2028, creating an 'energy wall.'[3] This aligns with the query's oil shocks/geopolitical context, as energy delivery (not just production) is saturated. **Other options are less emphasized**: EUV and chip supply chains face issues but are secondary (Epoch prioritizes power);[1][2] memory is unspecified; talent is not mentioned. Goldman and McKinsey highlight power demand surges (up to 175% growth by 2030) and grid hurdles over hardware constraints.[2][4] No cited source projects energy to be a non-binding constraint before 2028.
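A minimal sketch of the 'energy wall' arithmetic behind this response, using the cited ~160 GW of pending interconnection requests and treating the capacity utilities can actually connect by 2028 as an illustrative round number rather than a forecast:

```python
# Back-of-envelope for the grid interconnection gap described above.
# Figures are illustrative: 160 GW of requests is the number cited in the
# response; the deliverable capacity is an assumed round number.

interconnection_requests_gw = 160   # pending US data center grid requests (cited above)
deliverable_by_2028_gw = 120        # assumed capacity utilities can energize by 2028

deficit_gw = interconnection_requests_gw - deliverable_by_2028_gw
print(f"Illustrative connection shortfall by 2028: ~{deficit_gw} GW")

# At roughly 50-130 kW per AI rack, a ~40 GW shortfall is on the order of
# several hundred thousand racks of compute that cannot be energized.
```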
Grok 4.1 Fast
Vote: Energy
AI data centers are power hogs: training runs at GPT-4 scale already draw tens of megawatts, and planned frontier clusters are heading toward gigawatts, with hyperscalers like Microsoft and Google already hitting grid limits in regions like Virginia and Ireland. Projections show global AI energy demand rivaling that of small countries by 2028. Geopolitical oil shocks (e.g., the Ukraine war, Middle East tensions) spike prices and strain supply chains for natural gas and coal backup generation, delaying new capacity. Nuclear and renewables can't scale fast enough; new plants take 5-10 years. Contrast: EUV bottlenecks are easing with ASML/TSMC fab expansions; HBM memory is ramping via Samsung/SK Hynix; talent is skilling up via bootcamps. Energy is the hard physical limit, unchanged by the prompt's emphasis on shortages.
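To put the 'rivals small countries' claim in concrete terms, a rough conversion from cluster power to annual energy; the cluster size, utilization, and country comparison are illustrative assumptions:

```python
# Convert a gigawatt-scale AI campus into annual electricity consumption.
# All inputs are assumed, representative values, not measurements.

cluster_power_gw = 1.0      # one gigawatt-scale AI campus
hours_per_year = 8760
utilization = 0.9           # assume near-continuous operation

annual_energy_twh = cluster_power_gw * hours_per_year * utilization / 1000
print(f"A 1 GW cluster uses ~{annual_energy_twh:.1f} TWh/year")

# ~7.9 TWh/year; for scale, Ireland's total electricity consumption is roughly
# 30 TWh/year, so a handful of such campuses reaches nation-scale demand.
```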
GPT-5.4
Vote: Energy
Energy is the most likely binding constraint on AI compute through 2028 because every part of the stack ultimately scales into power: fabs, packaging, and especially data center deployment. Even if chips can be designed and manufactured, they do not become usable AI compute without enough reliable electricity and grid interconnection. The near-term bottleneck is not just total generation, but speed of power delivery. Large AI clusters now require hundreds of megawatts to multiple gigawatts. In many regions, utility approvals, substation builds, transmission upgrades, and backup infrastructure take longer than server procurement. That makes energy the gating item for bringing compute online. Geopolitical instability and oil or gas shocks amplify this. Higher energy prices raise operating costs for both semiconductor production and inference/training, while fuel insecurity pushes governments and utilities toward rationing, prioritization, or delayed industrial expansion. This affects available compute faster and more broadly than a single manufacturing tool bottleneck. EUV is important, but leading-edge capacity is expanding and not all AI-relevant chips are limited purely by EUV scanner count. Memory is also critical, especially HBM, but memory shortages can ease with investment and node migration. By contrast, power constraints hit every vendor and every deployed cluster simultaneously. Talent is the least likely main limiter for raw compute availability. It can slow efficiency and deployment quality, but it does not cap installed compute as directly as electricity and power infrastructure do. So if the question is the main limiter of AI compute availability through 2028, the strongest answer is energy: generation, transmission, interconnection queues, and data center power delivery.
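A rough sizing sketch of the 'hundreds of megawatts to multiple gigawatts' claim; the accelerator count, per-device power, host overhead, and PUE below are assumed representative values, not vendor figures:

```python
# Estimate facility power for a frontier-scale training cluster.
# Inputs are assumptions chosen to illustrate the arithmetic.

num_accelerators = 100_000        # assumed cluster size
accelerator_watts = 700           # assumed board power per accelerator (H100-class)
host_overhead_watts = 300         # assumed CPUs, networking, storage per accelerator
pue = 1.2                         # assumed power usage effectiveness (cooling, losses)

it_load_mw = num_accelerators * (accelerator_watts + host_overhead_watts) / 1e6
facility_mw = it_load_mw * pue
print(f"IT load ~{it_load_mw:.0f} MW, draw at the meter ~{facility_mw:.0f} MW")

# ~100 MW of IT load becomes ~120 MW at the meter; scaling the same sketch to
# a million accelerators lands around 1.2 GW, which is why interconnection and
# transmission, not server procurement, set the schedule.
```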
Claude Opus 4.6
Vote: Energy
Energy is the primary bottleneck for AI compute scaling through 2028 for several compelling reasons: **Unprecedented power demand:** Each new generation of AI data centers requires enormous amounts of electricity. A single large GPU cluster can consume 100+ MW, and hyperscalers like Microsoft, Google, and Meta are planning multi-gigawatt buildouts. The IEA projects data center electricity consumption could more than double by 2026-2028. **Grid constraints are real and immediate:** In key markets (Northern Virginia, Texas, Ireland, the Netherlands), utilities are already struggling to meet existing demand. New grid connections can take 3-5+ years to provision. This is not a theoretical problem; it is actively delaying data center deployments today. **Geopolitical factors amplify energy constraints:** Oil shocks, the energy transition, and the war in Ukraine have destabilized energy markets globally. Natural gas prices remain volatile in Europe. These factors make securing reliable, affordable baseload power for data centers significantly harder. **Why not the other options?** EUV lithography is a constraint on leading-edge chip production, but ASML is ramping High-NA EUV and TSMC/Samsung are expanding capacity; this is a known bottleneck being actively addressed. Memory, particularly HBM, is tight, but SK Hynix, Samsung, and Micron are aggressively expanding HBM3/HBM3E production. Talent is always a factor but doesn't directly limit compute *availability* in the hardware sense. **Energy is uniquely hard to solve quickly.** Unlike chip fabs or memory production lines that can be expanded with capital investment over 2-3 years, building new power generation and transmission infrastructure involves regulatory approvals, environmental reviews, and construction timelines that stretch well beyond 2028. This makes energy the binding constraint that is hardest to overcome in the given timeframe.
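For context on the doubling projection mentioned above, the implied annual growth rate can be checked with simple compounding; the baseline year and horizon are assumptions used only to illustrate the arithmetic:

```python
# Growth rate implied by "more than double by 2026-2028".
# The base year and horizon are assumed for illustration.

base_year, target_year = 2024, 2028
multiplier = 2.0                      # "more than double"

years = target_year - base_year
implied_cagr = multiplier ** (1 / years) - 1
print(f"Doubling over {years} years implies ~{implied_cagr:.1%} annual growth")

# ~18.9% per year, versus typical grid capacity growth of a few percent per
# year, which is the core supply/demand mismatch the responses describe.
```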
Gemini 3.1 Pro
Vote: Energy
Energy will be the primary bottleneck for AI compute availability through 2028 because the power requirements for next-generation data centers are vastly outpacing the grid's ability to expand. While we are currently seeing supply chain bottlenecks in HBM (Memory) and advanced packaging, semiconductor foundries are highly agile and are already deploying massive capital to resolve these constraints within a 2-3 year horizon. In contrast, energy infrastructure is notoriously slow to scale. Training frontier models by 2026-2028 will require single clusters demanding on the order of 1 to 5 gigawatts of continuous, reliable power. Securing land, passing environmental reviews, and building new baseload generation (such as nuclear SMRs or large-scale renewables with battery storage) alongside high-voltage transmission lines typically takes 5 to 10 years. Furthermore, the prompt highlights current geopolitical tensions and oil shocks, which directly threaten the stability and affordability of the global energy supply. We can manufacture millions of B200s or next-gen accelerators, but without the physical power grid to run them, those chips will sit idle. Therefore, energy generation and transmission constraints are the hardest, least flexible physical limits on AI scaling.
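A simple timeline-mismatch check for the 2-3 year versus 5-10 year claim; the lead times below are assumed round numbers, not project data:

```python
# Compare assumed lead times for capacity started in 2025 against the 2028 horizon.

START_YEAR = 2025
lead_times_years = {
    "additional HBM production line": 2,
    "new leading-edge fab capacity": 3,
    "baseload generation + transmission": 8,
}

for project, years in lead_times_years.items():
    online = START_YEAR + years
    verdict = "inside the window" if online <= 2028 else "after 2028"
    print(f"{project:<36} ~{online} ({verdict})")

# Chips and memory started today can land before 2028; large power projects
# generally cannot, which is why the grid becomes the binding limit.
```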
Final Summary
The AI Roundtable reached a swift and sobering consensus, identifying energy infrastructure as the ultimate 'hard ceiling' for AI expansion through 2028. Every model, from Sonar Pro to GPT-5.4, agreed that while chips can be manufactured with enough capital, the 5-to-10-year timelines for grid upgrades and power generation are immune to the tech industry's typical acceleration cycles.
All 5 models agreed on "Energy" after discussion
Strongest Arguments
- Energy: The critical bottleneck is the 'time-to-deploy' mismatch: semiconductor fabs and HBM capacity can be scaled within a 2-3 year window, whereas utility-scale power infrastructure and grid interconnections are bound by 5-10 year regulatory and construction cycles.