
For years, AI was talked about as if it lived nowhere.
It was “in the cloud,” which made it sound borderless, weightless, almost detached from the physical world. But that was always only half true. The cloud has buildings. It has land, substations, cooling systems, permits, cables, backup power, and legal jurisdictions. It can be attacked, regulated, localized, and denied.
That matters more now than it did even a year ago, because advanced AI is no longer just a software story. It is increasingly a story about infrastructure, geography, and power.
The clearest way to put it is this: AI is becoming geographically bounded by energy, law, security, and war. Trust in AI is fragmenting along the same lines.
In practice, that means the places where advanced AI can be built, hosted, and safely used are being determined less by engineering and more by geopolitics.
That is the shift.
The strikes on AWS facilities in the UAE and Bahrain this spring made that harder to ignore.[1] They exposed something that had been easy to miss while AI was still mostly discussed in terms of models, benchmarks, and product releases: hyperscale data centers are no longer just commercial real estate. They are strategic assets.[2] Once compute becomes strategically important, location stops being a question of efficiency and starts becoming a question of security.
That changes how we should think about AI.
The geography of AI is now being shaped by four hard realities.
The first is energy. Advanced compute consumes enormous amounts of electricity. Regions with abundant, reliable, affordable power have a natural advantage in hosting data centers and inference clusters. AI may feel digital, but at scale it is a power-hungry industry.
The second is physical security. A region can have cheap power, deep pockets, and serious ambition to become an AI hub, but if it also sits inside missile range, drone range, or a widening zone of sabotage risk, its appeal can change overnight. “Always-on” AI is only as dependable as the infrastructure underneath it, and infrastructure turns out to be very mortal.
The third is law. Even if a company wants to shift workloads during a crisis, data sovereignty rules may prevent sensitive data from leaving the country. Legal borders can become operational chokepoints.
The fourth is capital and time. AI infrastructure is expensive, specialized, and slow to replace. A damaged office can be relocated in days. A damaged data center takes years to rebuild.
This is why the Middle East has become such a revealing case. It combines abundant energy, enormous capital, aggressive AI ambition, and rising regional insecurity. On paper, it looks like a natural AI hub. In geopolitical reality, it also shows the problem clearly: some of the places best positioned to host AI at scale may also be among the most exposed to the kinds of disruption that make large-scale dependence risky.[3]
And this is not just about where models are trained.
The old question was: where are the best models built?
The new question is: where can AI safely operate at scale?
That is a bigger question, and a more consequential one. It reaches well beyond labs and into finance, logistics, health systems, public administration, and defense. As inference becomes woven into everyday services, compute continuity stops being a convenience and starts becoming a matter of economic resilience and institutional stability.[4]
In that sense, AI is beginning to look less like ordinary software and more like a utility. Not exactly like electricity or telecom, but close enough for the comparison to matter. Once a state or a company depends on uninterrupted AI services, the infrastructure underneath those services becomes strategic whether anyone wants to call it that or not.
That is where the trust problem gets more complicated than most public discussion allows.
Trust in AI is no longer just about whether a model hallucinates, whether a chatbot sounds confident when it is wrong, or whether a company publishes a transparency report. Those things still matter. They are just no longer the whole picture.
Now trust also means trusting the infrastructure, the legal environment, the security posture, and the information environment surrounding the model.
Can this country host AI safely?
Can the infrastructure survive disruption?
Can regulators, firms, and the public verify what is happening?
Can the model’s outputs be trusted not only in the narrow sense of accuracy, but in the broader sense of integrity?
That last point matters more than it used to. A second trust problem is emerging alongside the infrastructure problem: model integrity. Poisoning attacks, manipulated training inputs, and corrupted information environments create a quieter form of instability.[5] A system does not have to be visibly hacked to become less trustworthy. It can be gradually contaminated. Its outputs can drift. Confidence can erode slowly, persistently, and at scale.
That matters geopolitically because it lowers the threshold for interference. A hostile actor does not always need to destroy AI infrastructure to damage trust in AI. Sometimes it may be enough to poison what flows into the system.
This is especially serious for countries that are becoming more dependent on AI while lacking the institutions to test, audit, and verify what they are receiving.[6]
That is one of the least discussed parts of the problem. The global spread of AI does not automatically produce global confidence in AI. In many places, it may do the opposite. States may find themselves relying on systems built elsewhere, hosted elsewhere, governed elsewhere, and only thinly understood at home.
That is dependency without control.
And dependency without control does not produce trust. It produces vulnerability.
This is why the geography of AI is not just about infrastructure placement. It is also about political dependence.
The emerging map is being redrawn by several pressures at once: U.S.-China rivalry across the full AI stack, competition to pull third countries into rival technological ecosystems, rising exposure of strategic regions to conflict, and regulatory fragmentation that forces firms to build different deployment and compliance models for different jurisdictions.[7]
Geopolitics no longer just shapes AI policy around the edges. It is increasingly deciding who hosts AI, who governs it, and who can safely depend on it.
None of this means the future is fixed. Actors are already building workarounds.
Edge AI and smaller models reduce dependence on centralized hyperscale facilities. Decentralized compute networks attempt to route around targeted or sanctioned infrastructure. Model smuggling, whether through hardware transfer or physical media, offers one way around export controls and local restrictions. Large firms may mirror workloads across rival blocs to reduce single-point failure.
But these are not clean solutions. They all come with costs.
Edge deployment sacrifices capability and scale. Decentralized systems struggle with coordination, verification, and economics. Smuggling introduces opacity, compliance risk, and supply-chain uncertainty. Mirroring workloads across rival blocs is expensive and legally messy.
These strategies do not erase the geography of AI. They confirm it. They show that actors are already paying real costs to escape geographic constraint.
That is why the most under-discussed implication may also be the largest.
AI may cease to be meaningfully global.
Not in the sense that models will stop spreading, but in the sense that the world may increasingly divide into AI zones: places that are secure, trusted, and richly provisioned; places that are dependent, thinly assured, and politically exposed; and places caught somewhere in between, trying to reduce reliance without being able to build full-stack autonomy.
If that happens, AI will become a new axis of geopolitical inequality.[8]
Not just between the companies that build frontier models, but between the places that are able to run, govern, verify, and trust AI at scale and the places that are not.
That is the part we should be paying more attention to.
The question is no longer simply who will build the most advanced model. The question is who will be able to host AI securely, govern it credibly, audit it independently, and keep it running under pressure.
For governments, that means AI policy can no longer sit only with innovation offices and tech ministries. It belongs alongside energy policy, telecom resilience, defense planning, and national infrastructure protection.
For companies, it means site selection for AI workloads is becoming a geopolitical judgment, not just a cloud or real-estate decision.
And for countries at risk of becoming permanent consumers of systems they do not control, it means the real contest is not only access to AI, but access to assurance.[9]
That may sound less glamorous than model capability. It is also more important.
Because in the end, AI will not be shaped only by what engineers can build. It will be shaped by what states can protect, what laws can permit, what grids can support, and what societies can trust.
AI has a geography now.
And that geography is starting to look a lot like the world’s existing fault lines.
Notes
[1] Faye Simanjuntak, “Iran Is Hitting Data Centers in the Gulf. It’s Strategic,” Asia Society Policy Institute, April 1, 2026.
[2] Oliver Jabbour, “When data centres become targets: It’s time to treat AI infrastructure as critical infrastructure,” World Economic Forum, April 2, 2026.
[3] Andrea Benito, “AI infrastructure investment in the Middle East enters a new geopolitical reality,” Computer Weekly, March 24, 2026; Jabbour, “When data centres become targets.”
[4] Jabbour, “When data centres become targets”; Economist Impact, “Foundations at Risk: Building Resilient Digital Infrastructure,” accessed April 17, 2026.
[5] Atlantic Council experts, “Eight ways AI will shape geopolitics in 2026,” Atlantic Council, January 15, 2026.
[6] Talita Dias, “Closing the AI Assurance Divide: Policy Strategies for Developing Economies,” Partnership on AI, February 18, 2026.
[7] Aryamehr Fattahi, “Global Fragmentation of AI Governance and Regulation,” Bloomsbury Intelligence and Security Institute, January 30, 2026; “How the world can build a global AI governance framework,” World Economic Forum, November 10, 2025.
[8] Michael Muthukrishna and Philip Schellekens, “The Next Great Divergence: How AI could split the world again if we don’t intervene,” Brookings Institution, January 8, 2026; “The political geography of AI infrastructure,” Oxford Internet Institute, University of Oxford, accessed April 17, 2026; Doug Specht, “The geopolitical fragmentation of artificial intelligence,” Geographical, January 19, 2026.
[9] Dias, “Closing the AI Assurance Divide.”