Compute Governance and the Last Physical Chokepoints
Why Licensing, Inspections, and Power Monitoring Matter More Than “Ethics Guidelines”
AI governance debates often drift toward the software layer: model behavior, bias audits, content moderation, explainability. Those issues matter. But the framing they encourage carries a dangerous implication: that artificial intelligence is primarily a digital phenomenon, floating above the physical world and therefore governable mainly by norms, paperwork, and after-the-fact audits.
That framing breaks down at the frontier.
Frontier capability scales not just with clever algorithms, but with industrial inputs: advanced chips, data-center buildouts, specialized supply chains, and enormous electrical power. If society is serious about governing the most consequential AI systems—especially in a future where quantum integration may steepen development curves—then policy must take seriously the last reliable chokepoints: compute, hardware, and energy.
Software rules can be evaded. Physical constraints are harder to ignore.
The myth of software-only governance
There is a reason AI progress has been tightly correlated with investment in compute: scaling works. Empirical “scaling laws” research demonstrated predictable performance gains from increasing model size, data, and training compute across orders of magnitude—an observation that helped legitimize the industrial logic of “bigger training runs” as a path to better systems.^1 This does not mean scale is the only ingredient, or that it produces unlimited returns. But it does mean that capability is not merely a matter of code. It is a matter of resources.
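To make “predictable” concrete: the central empirical finding is that test loss falls as a smooth power law in training compute, roughly of the form

$$
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \sim 0.05,
$$

where $L$ is loss, $C$ is training compute, and $C_c$ and $\alpha_C$ are empirical fits. The exponent value is on the order reported by Kaplan et al. and is quoted here only to illustrate how shallow, yet how regular, the curve is; it is not a governance constant.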
Meanwhile, contemporary governance frameworks lean heavily toward voluntary or organizational best practices rather than enforceable constraints on frontier scaling. The NIST AI Risk Management Framework is explicitly voluntary and oriented toward risk management across the AI lifecycle, useful as guidance but not designed as a hard backstop against runaway capability races.^2 The OECD AI Principles similarly emphasize trustworthy AI, human rights, transparency, and accountability—foundational values, but not an enforcement mechanism.^3
Executive Order 14110—revoked in January 2025 but instructive as evidence of the governance impulse—leaned primarily on reporting requirements and standards development, not the imposition of hard capability ceilings.^4 These initiatives were meaningful, but they shared a common weakness: they assumed the state would remain able to observe, evaluate, and respond quickly enough as systems scaled. If iteration cycles compress, that assumption is fragile.
Compute governance is the corrective: it treats capability growth as an industrial process, not a purely digital artifact.
Compute is a physical constraint, not a metaphor
At the frontier, compute is not “cloud magic.” It is racks, chips, cooling systems, fiber, transformers, and long-term power contracts. It is physical footprint and electrical draw. This is precisely why compute governance is attractive as a policy lever: it re-anchors AI regulation in the realm where states already regulate aggressively—energy, export controls, industrial permitting, and national security.
The governance impulse was already gravitating toward compute before EO 14110’s revocation. That order had imposed reporting requirements on companies developing certain high-risk models and on entities possessing large-scale computing clusters.^5 Subsequent policy debates have continued to center compute thresholds as practical triggers for oversight, because compute can be measured and infrastructure can be located.
Export-control policy has also been moving in this direction. The U.S. Department of Commerce’s Bureau of Industry and Security (BIS) has repeatedly updated controls on advanced computing integrated circuits and related technology. In early 2025, BIS issued an interim final rule revising controls on advanced chips and adding controls related to certain AI model weights—an explicit attempt to govern diffusion of capability through hardware and artifacts, not just behavior.^6 In May 2025, BIS publicly announced rescission of a prior “AI Diffusion Rule” while simultaneously emphasizing strengthened chip-related export controls, illustrating both the volatility and the strategic centrality of compute governance.^7
The details are less important than the direction: policymakers are already gravitating toward compute as a practical enforcement surface.
The question is whether they will formalize this into a coherent regime—or continue with ad hoc moves that lag the pace of frontier development.
Three governance instruments that can actually scale
Three proposals follow. They will sound “too real” to people accustomed to ethics frameworks. That is precisely why they matter.
Mandatory compute licenses
A compute licensing regime treats extreme-scale training and deployment like other high-consequence activities: not banned, but conditional. The core idea is straightforward:
- Above a defined compute threshold (e.g., training runs exceeding a specified total number of operations), an entity must obtain a license; a rough estimation sketch follows this list.
- Licensure requires meeting specific safety, security, and governance requirements.
- Training beyond the threshold without licensure becomes an enforceable violation, tied to physical infrastructure and energy use.
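The arithmetic behind such a trigger is simple, which is part of its appeal. Below is a minimal sketch, using purely illustrative numbers: estimate a run’s total operations from accelerator count, per-chip throughput, sustained utilization, and duration, then compare against a threshold. The 1e26 figure echoes the reporting trigger in the now-revoked EO 14110 and stands in here only as a placeholder, not a proposed licensing line.

```python
# Minimal sketch, with illustrative numbers, of a compute-licensing trigger:
# estimate a run's total operations and compare to a threshold.

LICENSE_THRESHOLD_OPS = 1e26  # hypothetical licensing trigger (total operations)


def estimated_training_ops(num_accelerators: int,
                           peak_ops_per_sec_per_chip: float,
                           utilization: float,
                           training_days: float) -> float:
    """Back-of-the-envelope total operations for a training run."""
    seconds = training_days * 24 * 3600
    return num_accelerators * peak_ops_per_sec_per_chip * utilization * seconds


def requires_license(total_ops: float,
                     threshold: float = LICENSE_THRESHOLD_OPS) -> bool:
    """True if the estimated run crosses the hypothetical licensing threshold."""
    return total_ops >= threshold


if __name__ == "__main__":
    # Hypothetical cluster: 20,000 accelerators at ~1e15 ops/s peak,
    # 40% sustained utilization, running for 90 days.
    run = estimated_training_ops(20_000, 1e15, 0.40, 90)
    print(f"Estimated training compute: {run:.2e} operations")
    print("License required:", requires_license(run))
```

Double any one of those inputs and the same cluster crosses the line, which is why a workable threshold is defined in total operations rather than hardware counts alone.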
This is not radical in principle. We license nuclear facilities, chemical plants, large financial institutions, aviation systems, and pharmaceutical manufacturing. Licensing recognizes that some capabilities introduce systemic externalities large enough to justify ex ante controls.
The strongest argument for compute licensing is that it creates a governance on-ramp before systems reach the point where they are socially or strategically impossible to stop. It converts “trust us” into “show us,” and it does so at a layer that cannot be bypassed by changing terms of service.
Licensing also creates a natural place to require independent red-teaming, secure development practices, incident reporting, and model evaluation against standardized danger benchmarks—requirements that become toothless if applied only after deployment.
Critics will object that compute licensing will push development underground. The rebuttal is practical: frontier-scale compute is difficult to hide. The biggest training runs leave industrial fingerprints—hardware procurement, power draw, cooling buildouts, networking, and staffing.
A licensing regime would not eliminate clandestine activity. It would reduce casual, reckless scaling and create enforcement leverage against large actors whose infrastructure is hard to conceal.
International inspection regimes
Compute licensing works best domestically. But AI is transnational. Competitive dynamics will pressure states to defect unless oversight includes a credible international dimension.
This is where the analogy to nuclear safeguards becomes useful—not because AI is identical to nuclear weapons, but because the governance logic rhymes. The International Atomic Energy Agency (IAEA) uses safeguards and verification measures to provide confidence that nuclear material and technology are not being diverted to weapons purposes.^8 The IAEA’s mission includes “early detection” of misuse, backed by inspections and verification protocols.^9
AI governance could borrow the architecture without pretending the domains are interchangeable. An “IAEA-like” regime for frontier compute would focus on verifiable physical correlates:
- Registration and auditing of frontier compute clusters above agreed thresholds
- Verification of safety and security processes at licensed sites
- Internationally standardized reporting on extreme-scale training runs
- Cooperative mechanisms for investigating incidents, leaks, or suspected violations
This does not require inspectors reading model weights or source code. It requires inspectors verifying that sites claiming compliance actually operate within declared constraints—much as nuclear safeguards verify declared activities and detect suspicious inconsistencies.
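As a minimal sketch of what verifying declared constraints could mean operationally (every field name, per-chip power figure, and tolerance below is a hypothetical assumption, not drawn from any existing safeguards protocol): compare a site’s declaration against what its declared hardware implies and against what the meter actually records, and flag inconsistencies for follow-up.

```python
# Minimal sketch of a declaration-consistency check: compare declared capacity
# against implied and metered power, and flag discrepancies for inspection.

from dataclasses import dataclass


@dataclass
class DeclaredSite:
    site_id: str
    declared_accelerators: int
    declared_peak_mw: float          # declared facility power ceiling, in megawatts
    declared_max_ops_per_run: float  # declared cap on any single training run


def implied_peak_mw(accelerators: int, kw_per_accelerator: float = 1.5) -> float:
    """Power implied by the declared hardware (all-in kW per accelerator is assumed)."""
    return accelerators * kw_per_accelerator / 1000


def consistency_flags(site: DeclaredSite,
                      metered_peak_mw: float,
                      tolerance: float = 0.25) -> list[str]:
    """Return human-readable flags where observations exceed the declaration."""
    flags = []
    implied = implied_peak_mw(site.declared_accelerators)
    ceiling = site.declared_peak_mw * (1 + tolerance)
    if implied > ceiling:
        flags.append("declared hardware implies more power than the declared ceiling")
    if metered_peak_mw > ceiling:
        flags.append("metered peak power exceeds the declared ceiling")
    return flags


if __name__ == "__main__":
    site = DeclaredSite("site-001", declared_accelerators=50_000,
                        declared_peak_mw=55.0, declared_max_ops_per_run=5e25)
    print(consistency_flags(site, metered_peak_mw=95.0))
```

The logic mirrors nuclear safeguards in miniature: the inspector does not need to see inside the reactor, only to confirm that declared and observed quantities add up.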
Skeptics will say: “No major power will allow inspectors into sensitive facilities.” Yet nuclear safeguards exist precisely because states accepted that some transparency is preferable to uncontrolled proliferation. The political feasibility is hard, but not obviously harder than trusting each nation’s frontier labs to self-regulate while racing for strategic advantage.
The near-term objective is not global perfection. It is a coalition of serious states setting a standard that gradually becomes the cost of admission for advanced compute trade and cooperation.
Power-grid monitoring agreements
Compute governance becomes dramatically more enforceable when tied to energy.
Frontier data centers require large, stable electricity supplies. They sign power purchase agreements, build substations, and coordinate with utilities. All of this creates monitoring opportunities that are far more robust than monitoring software behavior.
A power-grid monitoring agreement would do three things:
- Create measurable triggers: electricity usage and load profiles can indicate when clusters scale past thresholds.
- Enable enforcement: licensing violations can be corroborated by energy telemetry and facility audits.
- Increase early warning: sudden capacity expansion signals capability escalation—valuable for governance and strategic stability.
This is not science fiction. Governments already monitor grid stability and critical infrastructure. The novelty is using that monitoring as an enforcement input for frontier compute licensing and international verification.
A plausible structure is a set of standardized reporting rules for utilities and data-center operators, coupled with privacy-preserving mechanisms that avoid exposing proprietary details while still enabling threshold-based compliance checks.
In other words: you do not need to know what the model is. You need to know that the cluster is large enough to matter.
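Here is a minimal sketch of that “large enough to matter” test, assuming illustrative per-accelerator power and facility-overhead figures: translate sustained metered power into a rough accelerator count and compare it against a review threshold.

```python
# Minimal sketch: infer rough cluster scale from sustained facility power draw.
# The per-accelerator draw and facility overhead (PUE) figures are assumptions.

def estimated_accelerators(sustained_mw: float,
                           kw_per_accelerator: float = 1.5,
                           pue: float = 1.3) -> int:
    """Approximate accelerator count supported by a sustained facility power draw."""
    it_load_kw = sustained_mw * 1000 / pue   # strip cooling/overhead via PUE
    return int(it_load_kw / kw_per_accelerator)


REVIEW_TRIGGER_ACCELERATORS = 10_000  # hypothetical threshold for closer review

if __name__ == "__main__":
    for mw in (5, 30, 120):
        n = estimated_accelerators(mw)
        tag = "(review)" if n >= REVIEW_TRIGGER_ACCELERATORS else ""
        print(f"{mw:>4} MW sustained -> ~{n:,} accelerators {tag}")
```

The point is not precision. Even with loose assumptions, sustained draw in the tens of megawatts separates frontier-relevant clusters from ordinary data-center loads.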
“Arms race” is not a reason to avoid governance; it is the reason to build it
The hardest counterargument is familiar: “If we regulate, adversaries won’t. So regulation is self-handicapping.”
This argument is emotionally potent and strategically incomplete.
First, frontier compute is constrained by concentrated supply chains and specialized hardware. It is not infinitely democratized. That concentration creates leverage for cooperative enforcement.
Second, arms-race dynamics are precisely why verification regimes exist in other domains. The response to rivalry is not to abandon governance; it is to build governance mechanisms that remain credible under rivalry.
Third, partial governance is still valuable. Compute licensing can reduce reckless scaling by major domestic actors, improve safety and security practices, and create clearer liability when catastrophic negligence occurs. It can also reduce the probability of accidental or premature deployment driven by competitive panic.
From compute governance to conditional scaling
Compute licensing, inspections, and power monitoring are not ends in themselves. They are the enforcement architecture that makes a deeper principle possible:
Scaling should be conditional.
Not on whether a company claims good intentions. Not on whether a model has a glossy “Responsible AI” report. Conditional on meeting independently verifiable safety and security benchmarks commensurate with the capability being unleashed.
The governance logic developed throughout this series points here: as systems approach thresholds that might enable irreversible autonomous agency or destabilizing strategic effects, the burden of proof must shift from the public to the developer. The developer must demonstrate—not assert—adequate safeguards.
Compute governance is the most realistic way to operationalize that principle. It provides a measurable trigger in the form of compute and energy thresholds, a licensing gate that sets pre-deployment conditions, and a verification system through audits and ongoing monitoring.
Absent that architecture, governance remains downstream—perpetually reacting to systems already deployed, already copied, already integrated.
There are early signals that governments are at least willing to name the category of concern. The Bletchley Declaration from the 2023 AI Safety Summit recognized shared concern about frontier AI risks and encouraged cooperation on risk-based policies.^10 It was not a treaty, and it lacked enforcement mechanisms. But declarations can become the scaffolding for mechanisms, if the political will follows. Compute governance is what that will would look like in practice.
Conclusion: govern what is real
Part I argued that convergence may steepen the capability curve. Part II argues that society should respond by governing what is physically real: compute, chips, and power.
It is tempting to treat AI as a purely informational phenomenon and therefore to focus on informational governance. But the frontier is industrial, which means it can be regulated with industrial tools: licensure, inspections, and infrastructure monitoring.
If quantum integration eventually steepens the curve, the urgency increases. But the governance logic does not depend on quantum. It depends on the fact that capability growth has physical dependencies that remain governable—if policymakers have the courage to treat them as such.
Software policies may set norms. Compute governance sets limits.
The future of AI safety may hinge less on what systems say, and more on what societies are willing to require before letting them scale.
Notes
1. Jared Kaplan et al., “Scaling Laws for Neural Language Models,” arXiv preprint arXiv:2001.08361 (2020), https://arxiv.org/pdf/2001.08361.
2. National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1 (January 2023), https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.
3. Organisation for Economic Co-operation and Development (OECD), “OECD AI Principles,” adopted May 2019, https://oecd.ai/en/ai-principles.
4. Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Federal Register 88, no. 211 (November 1, 2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
5. Congressional Research Service, “Highlights of the 2023 Executive Order on Artificial Intelligence,” R47843 (April 3, 2024), https://www.congress.gov/crs-product/R47843.
6. Bureau of Industry and Security, “Framework for Artificial Intelligence Diffusion,” Federal Register (January 15, 2025), https://www.federalregister.gov/documents/2025/01/15/2025-00636/framework-for-artificial-intelligence-diffusion.
7. Bureau of Industry and Security, “Department of Commerce Announces Rescission of Biden-Era Artificial Intelligence Diffusion Rule, Strengthens Chip-Related Export Controls,” press release (May 13, 2025), https://www.bis.gov/press-release/department-commerce-announces-rescission-biden-era-artificial-intelligence-diffusion-rule-strengthens.
8. International Atomic Energy Agency (IAEA), “Safeguards and Verification,” https://www.iaea.org/topics/safeguards-and-verification.
9. International Atomic Energy Agency (IAEA), “Basics of IAEA Safeguards,” https://www.iaea.org/topics/basics-of-iaea-safeguards.
10. UK Government, “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023” (November 2, 2023), https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.