Quantum AI and the Race to Govern Artificial Superintelligence – Part IV

Geopolitical Compression and the U.S.–China AI Race


Why Strategic Competition Makes Governance Harder—and More Necessary

Technological governance does not occur in a vacuum. It occurs inside rivalry.

Advanced artificial intelligence is now explicitly embedded in national strategy for both the United States and China. The 2021 report of the U.S. National Security Commission on Artificial Intelligence warned that leadership in AI will shape military, economic, and geopolitical power in the twenty-first century.¹ China’s 2017 “Next Generation Artificial Intelligence Development Plan” declared AI a strategic priority and called for global leadership by 2030.² Neither government treats AI as a niche innovation. Both treat it as foundational infrastructure.

Add quantum computing to this landscape, and the stakes intensify. Quantum systems promise potential advantages in cryptography, materials science, optimization, and simulation—domains directly relevant to both economic competitiveness and national security. Even if practical quantum advantage remains limited in the near term, the perception of future advantage shapes behavior now. When emerging technologies are perceived as strategically decisive, competition compresses time.

Every major arms competition begins with the same underlying assumption: falling behind is unacceptable. The U.S.–China AI race is already framed in terms of strategic advantage. AI integration into military logistics, intelligence analysis, autonomous systems, and cyber operations is proceeding in both countries. Semiconductor export controls and counter-controls signal that compute infrastructure itself is now recognized as a geopolitical asset. In such an environment, the logic of speed dominates.

Speed produces predictable distortions. Policymakers and technologists begin to assume that delay creates vulnerability. Caution becomes synonymous with weakness. Long-term safety concerns are discounted in favor of near-term competitive gain. The most corrosive distortion, however, is what speed does to transparency. In ordinary contexts, transparency functions as a stabilizing norm: states signal capabilities, acknowledge limitations, and reduce the uncertainty that feeds miscalculation. Under competitive pressure, this logic inverts. Revealing what you can or cannot do may expose an advantage to rivals. Transparency becomes a liability. And once transparency becomes a liability, the informational foundations that governance depends on erode precisely when governance is most needed.

If convergence between advanced AI and scalable quantum computing steepens capability curves—as explored in Part I—the compression intensifies. Faster iteration cycles mean shorter windows for oversight. Perceived breakthroughs increase pressure to accelerate rather than pause. The race dynamic does not require hostility; it requires uncertainty. And uncertainty, in strategic contexts, is enough.

History offers a sobering pattern. The early nuclear era was marked not by stable deterrence theory but by rapid escalation, secrecy, and testing. Only after dangerous proximity to catastrophe did arms control architectures mature. The Biological Weapons Convention and Chemical Weapons Convention likewise emerged after recognition that unconstrained development posed unacceptable risks. Emerging technologies that promise decisive advantage tend to destabilize equilibrium before stabilizing it.

Artificial intelligence differs from nuclear weapons in obvious ways. It is more diffuse, more commercially embedded, and more integrated into civilian infrastructure. But that diffusion increases, rather than decreases, the difficulty of strategic management. If either the United States or China believes that quantum-enhanced AI confers decisive advantage in cryptanalysis, cyber operations, autonomous military coordination, or strategic decision support, competitive escalation becomes rational from their perspective. Rational escalation, however, can still produce systemic instability.

Acceleration at the frontier compresses deliberation time. AI-driven decision-support systems integrated into military or intelligence pipelines may increase response speed in crisis scenarios. If quantum advances eventually undermine widely used encryption systems, the secure channels that states rely on for strategic signaling could be compromised.³ Increasing automation in command-and-control pipelines may reduce the human deliberative buffers that historically slowed escalation. The danger in such a system is not malevolent intent but miscalculation under speed.

There is a paradox at the center of technological rivalry: technologies pursued for security can initially increase insecurity. Nuclear weapons eventually produced deterrence doctrines and arms control treaties, but that stabilization occurred only after intense expansion and brinkmanship. AI and quantum computing together introduce destabilizing variables that are still poorly understood. Decision acceleration, opacity of capability, cryptographic disruption, and autonomous integration each amplify uncertainty during transition periods. Even if none reach catastrophic extremes, each increases instability risk during the phase when norms and guardrails are weakest.

The standard objection to governance is blunt: the United States and China will not meaningfully constrain themselves in a domain they consider strategically decisive. This objection deserves serious consideration. Yet it overlooks structural realities.

Frontier compute is not infinitely democratized. Advanced semiconductor fabrication remains concentrated among a small number of firms and geographies. High-end chip manufacturing depends on complex global supply chains that are already subject to export controls and regulatory oversight. Hardware, unlike software, creates leverage. Recent export-control regimes demonstrate that states recognize this leverage and are willing to use it.

Moreover, states have historically cooperated in adversarial environments when mutual vulnerability becomes clear. Nuclear arms control did not require trust; it required recognition that unconstrained escalation was unacceptable. Verification regimes were constructed precisely because trust was insufficient. Partial cooperation proved preferable to none. The existence of rivalry did not eliminate governance; it shaped its design.

Even limited coordination can matter. Bilateral or plurilateral agreements on compute transparency, reporting thresholds, or inspection protocols could reduce ambiguity and miscalculation. In November 2023, countries attending the AI Safety Summit at Bletchley Park publicly acknowledged shared concern about frontier AI risks.⁴ The resulting declaration lacked enforcement mechanisms, and it would be a mistake to overread it. Yet the fact that the United States, China, and the European Union signed the same document—despite active rivalry in the underlying technology—demonstrates something more durable than optimism: that major powers can name a shared danger even when they are not prepared to jointly constrain it. That gap between recognition and action is precisely where a governance architecture must be built. Recognition without follow-through does not reduce risk. But it establishes a baseline from which formal obligations can be negotiated—as it did in the nuclear era, and as it must here.

The deeper structural point is this: strategic competition increases the cost of restraint, but it also increases the cost of miscalculation. As capability accelerates, the downside of uncoordinated escalation grows. In such an environment, governance becomes less about idealism and more about stability engineering.

Compute licensing, inspection regimes, and power-grid monitoring agreements—outlined in Part II—should therefore be understood not as moral appeals but as strategic tools. By tying scaling to verifiable thresholds, states reduce ambiguity. By monitoring extreme-scale compute clusters, they create early warning signals. By institutionalizing reporting standards, they decrease incentives for secretive overreach. Governance does not eliminate rivalry; it channels it.
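
To make the mechanism concrete, consider a minimal sketch of threshold-triggered reporting. The tier thresholds, reporting obligations, and power-draw cutoff below are hypothetical illustrations, not values drawn from any existing or proposed regime; the point is only that compute-denominated triggers can be expressed unambiguously and cross-checked against externally observable signals.

```python
from dataclasses import dataclass

# Hypothetical reporting tiers, keyed to total training compute (FLOP).
# These numbers are illustrative placeholders, not proposed policy values.
REPORTING_TIERS = [
    (1e26, "tier-3: pre-registration, third-party safety audit, inspection access"),
    (1e25, "tier-2: disclosure of training-run scale and cluster location"),
    (1e24, "tier-1: confidential notification to the licensing authority"),
]

@dataclass
class TrainingRun:
    operator: str
    total_flop: float        # estimated total training compute
    cluster_power_mw: float  # sustained power draw, observable via grid monitoring

def required_obligations(run: TrainingRun) -> str:
    """Return the highest reporting tier a run triggers, if any."""
    for threshold, obligation in REPORTING_TIERS:
        if run.total_flop >= threshold:
            return obligation
    return "below all thresholds: no reporting obligation"

def grid_flag(run: TrainingRun, mw_threshold: float = 50.0) -> bool:
    """Power-grid cross-check: a sustained draw above the cutoff implies a
    cluster large enough that an undeclared frontier run becomes conspicuous."""
    return run.cluster_power_mw >= mw_threshold

run = TrainingRun(operator="example-lab", total_flop=3e25, cluster_power_mw=80.0)
print(required_obligations(run))  # tier-2 obligations apply
print(grid_flag(run))             # True: grid monitoring corroborates scale
```

The design point is that each trigger is tied to a quantity an outside party can estimate independently, which is what reduces the need for worst-case inference.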

In a purely competitive frame, every breakthrough must be matched immediately. In a governed frame, breakthroughs trigger review processes, safety audits, and shared transparency obligations. The alternative is a feedback loop of suspicion and acceleration. If one nation suspects the other of crossing a capability threshold—particularly in quantum-enhanced optimization or autonomous military integration—the pressure to scale preemptively intensifies. Without shared verification mechanisms, worst-case thinking dominates.
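
The feedback loop itself can be made explicit with a stylized model. The sketch below is not a forecast: the update rule, noise level, and margin are assumptions chosen only to show how uncertainty about a rival's capability drives mutual acceleration, and how a shared verification channel (modeled simply as a narrower estimation error) dampens it.

```python
import random

def simulate(noise: float, steps: int = 20, seed: int = 0) -> float:
    """Two rivals scale capability in response to noisy estimates of each other.

    Each side observes the other's capability with an uncertainty band and,
    under worst-case thinking, scales just past its highest plausible estimate.
    `noise` stands in for opacity; a verification regime shrinks it.
    Returns the final combined capability, a crude proxy for escalation.
    """
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    for _ in range(steps):
        # Worst-case estimates: observed values inflated by the uncertainty band.
        est_b = b * (1 + noise * rng.random())
        est_a = a * (1 + noise * rng.random())
        # Each side scales to exceed its worst-case estimate of the rival.
        a = max(a, est_b * 1.05)
        b = max(b, est_a * 1.05)
    return a + b

print(simulate(noise=0.5))   # high opacity: rapid mutual escalation
print(simulate(noise=0.05))  # verification narrows uncertainty: slower spiral
```

The only structural difference between the two runs is the width of the uncertainty band; the margin each side demands is identical. That is the narrow sense in which verification is stability engineering rather than idealism.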

The risk is not a single runaway system. It is reciprocal escalation under compressed timelines.