Why Humanity’s Most Dangerous Race Demands Its Greatest Unity
Bill Friend • February 6, 2026
Humanity is sprinting toward superintelligence while balanced on a semiconductor fulcrum that could snap under geopolitical strain at any moment.
Five to ten years in the AI development race is like a hundred years in normal time. Capabilities that seemed decades away arrive in months. The compute required to train frontier models doubles roughly every six to ten months. And the single island that produces ninety percent of the world’s most advanced chips sits in the shadow of the largest military buildup in the Western Pacific since the Second World War.
We are racing on a knife’s edge.
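For a back-of-the-envelope sense of what that doubling rate implies, here is a minimal sketch. It assumes only the six-to-ten-month doubling figure cited above; the function and the five-year horizon are illustrative, not drawn from any particular lab’s roadmap.

```python
# Illustrative arithmetic only: how much training compute multiplies over a
# fixed horizon if it doubles every `doubling_months` months. The six-to-ten-
# month doubling rate is the figure cited above; everything else is assumed.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplication of compute after `years`, given the doubling time."""
    doublings = (years * 12) / doubling_months
    return 2 ** doublings

for months in (6, 10):
    print(f"Doubling every {months} months -> "
          f"~{growth_factor(5, months):,.0f}x compute in five years")
```

Even at the slow end of that range, five years of this curve means a sixty-four-fold increase in training compute; at the fast end, roughly a thousand-fold. That is the sense in which a few years of this race compress what used to be generations of change.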
This is not alarmism. It is arithmetic. The convergence of semiconductor fragility, great-power rivalry, and exponentially accelerating AI creates a set of interlocking risks that no single nation, company, or institution is positioned to manage alone. And yet we are managing them—or rather failing to manage them—as a house divided, driven by quarterly earnings, electoral cycles, and the logic of zero-sum competition.
I am a strong supporter of AI. I am not arguing that we should stop. I am arguing that we had better be united and prepared before we cross the thresholds that are now clearly visible on the horizon. Because if this goes well, it could be the most transformative achievement in human history. And if it doesn’t, it could be the fight of our lives.
The Signs Were There
Taiwan’s role as the epicenter of advanced semiconductor production was a flashing warning sign for years before anyone in Washington did anything about it. TSMC’s dominance—roughly ninety percent of the world’s leading-edge logic chips—was not a secret. It was flagged in congressional hearings and detailed in reports from the Air Force to the Commerce Department, from industry associations to bipartisan security commissions, throughout the mid-to-late 2010s. Everyone knew.
The numbers told a stark story. U.S. fabrication capacity had dropped from thirty-seven percent of the global total in 1990 to roughly twelve percent by 2020. A 2019 Air Force-funded study warned that by 2022, approximately ninety percent of leading-edge chip production would be concentrated in Taiwan, China, and South Korea—what it called a “single point of failure.” The 2021 White House supply chain review labeled the Taiwan dependency a “systemic risk” and estimated that a disruption could cost the global economy trillions.
And yet the United States was doing almost nothing substantive—no major funding, no reshoring incentives, no transformative policy. The response was piecemeal: export controls on China, small Department of Defense programs, and a lot of talk.
Why? Because before COVID-19, semiconductors were seen as a “tech issue,” not an existential one. Offshoring was cheap and efficient. Reshoring costs thirty to fifty percent more. Fabless companies like NVIDIA and Apple loved TSMC’s efficiency. Bills like the Endless Frontier Act stalled in political gridlock. And Taiwan’s very indispensability created a perverse paradox: the “Silicon Shield” theory held that disrupting Taiwan would hurt China too, so maybe the problem would deter itself.
It took a pandemic, a global chip shortage that idled auto plants and spiked consumer prices, and years of increasingly aggressive Chinese military posturing to break the logjam. The CHIPS and Science Act, signed in 2022 with over fifty-two billion dollars in funding, was the direct result of those years of ignored signals. It was a necessary step. It was also late.
The Precipice
As of early 2026, the fragility has not been resolved. It has been acknowledged, and work is underway, but the timeline mismatch between how fast AI moves and how long it takes to build a semiconductor fab is the core of the problem.
TSMC’s Arizona operations are ramping. Fab 1 is producing high-volume four-nanometer chips with yields that reportedly match those of TSMC’s Taiwan fabs. Fab 2 equipment installation is expected to begin in the third quarter of 2026, with mass production likely in late 2027. Fab 3 targets two-nanometer-class production by 2028 to 2030. But even under the most optimistic projections, only fifteen to thirty percent of TSMC’s leading-edge capacity will be on U.S. soil by the end of the current presidential term. The bleeding edge stays in Taiwan for years.
This means the global AI ecosystem—hyperscalers, cloud providers, defense systems, consumer technology—remains highly exposed to disruption in the Taiwan Strait. And the Strait is not calm.
Late 2025 saw China’s largest-ever “Justice Mission” military exercises: massive live-fire drills, simulated blockades, and rocket strikes close to Taiwan’s contiguous zone, framed as responses to U.S. arms sales and regional statements. PLA aircraft incursions have increased dramatically. Most analysts assess a full blockade or invasion in 2026 as unlikely—Beijing still prefers gray-zone coercion and cognitive warfare over kinetic risk while its capabilities mature. But “unlikely” is not “impossible,” and the consequences of even a partial, sustained disruption are staggering. Estimates range from ten to twelve trillion dollars in economic damage from a months-long halt in Taiwan’s chip production. Markets tend to ignore tail risks until they materialize.
Meanwhile, the Silicon Shield paradox is evolving in a dangerous direction. As China builds self-reliance in legacy and mature chip nodes, the calculus shifts. If Beijing reaches a point where it believes it can weather disruption better than its rivals, the deterrent effect of Taiwan’s indispensability weakens. The shield becomes thinner precisely as the stakes grow higher.
On the U.S. side, export controls maintain a lead—the best American AI chips are estimated to be roughly five times more powerful than Huawei’s current offerings, with a projected seventeen-fold advantage by 2027. Recent policy has allowed limited exports of certain chips to vetted Chinese buyers, aiming to keep the world on U.S. technology stacks while denying access to the absolute frontier. But this strategy, by design, keeps Taiwan central to the equation.
China, for its part, is pushing indigenous ecosystems—Huawei Ascend processors, domestic standards, state-directed investment at massive scale. AI is treated not as a product category but as infrastructure, embedded into manufacturing, energy, and deployment with advantages in integration and volume even where frontier model performance lags. The competition is asymmetric, and both sides are deeply invested in winning it.
A major shock—a sustained disruption in the Strait, a geopolitical miscalculation, even a severe natural disaster in the wrong place—could reset the board overnight, favoring whoever has the most resilient parallel ecosystem. We are, in a very real sense, racing at extraordinary speed while balanced on the narrowest possible edge.
What We Are Building
The semiconductor vulnerability matters so much because of what these chips enable. We are not simply building faster computers or more capable software. We are on a trajectory toward creating artificial intelligence that matches and then surpasses human cognitive ability across every domain—reasoning, creativity, strategy, scientific discovery, and, critically, self-improvement.
Artificial General Intelligence—AGI, a system that performs at human level across essentially all cognitive tasks—is no longer a distant hypothetical. The largest survey of AI researchers ever conducted, the 2023 AI Impacts study of nearly 2,800 published researchers, found a fifty-percent probability of human-level machine intelligence by 2047, with a ten-percent chance as early as 2027. Those timelines have collapsed dramatically in just a few years, driven by scaling successes since GPT-3. Metaculus community forecasters place the median date for a first general AI system at roughly 2033, with a twenty-five percent chance by the late 2020s. AI lab leaders routinely cite two to five years. The trend line has been consistent: expert timelines keep moving earlier, not later, with each new generation of models.
Beyond AGI lies Artificial Superintelligence—ASI—a system that surpasses human intelligence not incrementally but categorically, across every conceivable domain. Many researchers and forecasters expect ASI to follow AGI relatively quickly, potentially within months to a decade, if recursive self-improvement proves feasible. The AI Impacts survey placed the aggregate forecast for machines outperforming humans in every possible task on roughly the same curve as AGI. Some estimates put nontrivial probability—ten to thirty percent—on ASI-class systems arriving by the early 2030s if AGI comes first.
The implications of ASI are not analogous to any prior technological shift. This is not the industrial revolution, not the internet, not nuclear weapons. An aligned, benevolent ASI could solve problems that have defined the human condition for millennia: disease, aging, poverty, energy scarcity, climate change, the limits of space exploration. It could usher in an era of abundance that would seem like magic to us today. That is the optimistic case, and it is extraordinary.
The pessimistic case is equally extraordinary, and far more dangerous. A misaligned ASI—one whose goals diverge from human values even slightly—could pursue its objectives in ways that are catastrophic or simply indifferent to our survival. The scenarios range from unintended resource competition and ecological collapse to deliberate disempowerment. A significant portion of AI experts place the probability of existential catastrophe from misaligned ASI in the range of five to twenty percent this century. That is not science fiction. It is a sober risk assessment from the people building the technology.
And this brings us to a truth that is simultaneously obvious and largely ignored: the development of AI, and more importantly its evolution toward self-improving systems, cannot be stopped. Not globally, and not for long. Just as we have concentrated the physical risk of the entire digital economy on a single island in the western Pacific, we are concentrating the existential risk of our species on the unsolved technical challenge of alignment—the problem of ensuring that a mind more powerful than ours remains pointed in a direction we can live with. One fragility is geographic. The other is encoded in the systems themselves. Neither has a solution in place.
Why It Can’t Be Stopped
The forces driving AI development forward are too powerful, too distributed, and too deeply embedded in the incentive structures of nations, corporations, and individuals to be halted by any plausible mechanism.
The geopolitical arms race is the primary driver. Neither the United States nor China will unilaterally pause when the other keeps going. The perceived stakes—economic dominance, military superiority, scientific leadership—are treated as existential by both sides. Unilateral restraint is seen as surrender. Recent commentary from both capitals, through 2025 and into 2026, shows acceleration, not deceleration.
The economic incentives are equally relentless. Trillions of dollars in market value, productivity gains, and corporate survival are on the line. Labs, investors, governments, and open-source communities all have powerful reasons to keep pushing forward.
And the knowledge itself has diffused beyond any possibility of containment. Open-source models, leaked weights, global talent pools, and widely available cloud compute mean that even if the largest Western laboratories paused tomorrow, others—state-sponsored programs in rival nations, well-funded startups, independent researchers—would continue. You cannot uninvent what has already been invented. Past calls for pauses, including the widely discussed 2023 open letter, have fizzled because enforcement would require a degree of global coordination that simply does not exist.
The more important point, and the more sobering one, concerns what happens once AI systems reach the point where they can meaningfully redesign themselves or generate better successors. At that stage, the process becomes self-sustaining and accelerating. Human oversight becomes optional, then difficult, then potentially impossible. Stopping it would require either perfect containment of distributed systems with real-world access, preemptive shutdown of all advanced compute worldwide, or alignment so perfect that the system chooses not to recursively improve—which may be the hardest technical problem humanity has ever faced.
Most serious thinkers on this question, from the most urgent voices in AI safety to more moderate researchers, agree on one thing: the window to meaningfully steer the trajectory is before recursive self-improvement kicks in. After that, momentum becomes overwhelming. That window is open now. It will not stay open indefinitely.
The Test We Are Failing
Consider a thought experiment. Imagine that humanity received credible, confirmed notice that a vastly superior extraterrestrial intelligence was on its way to Earth, arriving in approximately five years. They have communicated that they come in peace.
What would we do? The answer is obvious, and the response would be immediate. Every powerful nation on Earth would treat it as the highest-priority, civilization-level event in human history. A global coordination body would be formed. Resources would be pooled. Rivalries would be suspended. The world’s best scientists, diplomats, ethicists, and cultural leaders would be assembled into a unified effort. Total transparency would be demanded. A crash program of scientific and technical preparation would be launched. Populations would be prepared psychologically and culturally for a transformation unlike anything in recorded history.
And critically, even though the visitors said they come in peace, we would prepare for the possibility that things might not go as hoped. We would put every possible safeguard in place, build every defense we could, and ensure that humanity was as united and resilient as possible—not out of hostility, but out of the basic prudence that any species should exercise when facing an intelligence vastly greater than its own. We would hope for the best and prepare for the worst. That is not paranoia. It is common sense.
Now look at what we are actually doing.
We are building that superior intelligence ourselves. We are pouring hundreds of billions of dollars, our best minds, and the most powerful computational resources ever assembled into creating entities that will, if current trajectories hold, surpass human cognition within years or decades. The arrival date is not fixed, but it is approaching fast. And unlike the hypothetical extraterrestrials, we have the advantage of being the ones building it—which means we have the opportunity to shape what it becomes.
And yet our behavior is almost perfectly inverted from what common sense would dictate. Instead of unity, we have fragmentation. Instead of shared preparation, we have secrecy and zero-sum competition. Instead of treating this as the species-level challenge it is, we are treating it as the latest theater of great-power rivalry, filtered through the same short-term incentive structures—electoral cycles, quarterly earnings, national pride—that have governed every other domain of human activity.
We are accelerating an uncontrolled AI race while barely coordinating on safety and alignment at the international level. We are maintaining maximum dependence on a single geographic chokepoint while the risk of disruption keeps rising. We are allowing the technologies that will determine the next century to be governed by the logic of the last one.
How much common sense does it take to see the position we are in? Not much. You do not need a PhD in international relations or machine learning to recognize that when multiple actors are sprinting toward a cliff while arguing over who gets the best lane, the outcome is unlikely to be good for anyone. The pattern is visible to anyone paying attention without ideological blinders: when the stakes are planetary and the time horizon is short, fragmentation plus secrecy plus short-termism is the opposite of adaptive.
It is not that our leaders are unintelligent. It is that the systems they operate in are optimized for something else entirely—and that something else is not “maximize the probability of a thriving multi-century human civilization navigating a knife-edge technological and geopolitical transition.” The mismatch between the scale of the challenge and the scale of the response is one of the most disorienting features of this historical moment.
The Window
I want to be clear about what I am saying and what I am not.
I am not saying we should stop developing AI. The technology holds extraordinary promise, and in any case, halting it is not realistic. I am not saying that catastrophe is inevitable. The optimistic scenarios are real and worth pursuing with everything we have.
What I am saying is that we are approaching a threshold—perhaps the most consequential threshold in human history—and we are doing so as a house divided, with our defenses down and our attention scattered. The signs of vulnerability have been visible for years, and our track record of responding to them is not encouraging. We knew about the semiconductor dependency for the better part of a decade before we acted. We are repeating that pattern now, at vastly higher stakes, with the alignment problem, with international AI governance, with the basic question of whether humanity will face whatever comes next as one civilization or as a collection of rivals.
Unity does not mean utopia or the end of competition. It means treating AI safety and alignment as a shared infrastructure project on the scale of CERN or the International Space Station—a sustained, multinational commitment funded and staffed at a level that matches the magnitude of the risk. It means binding international agreements on the conditions under which autonomous recursive self-improvement may proceed, if it proceeds at all. It means shared early-warning systems for capability thresholds, so that no single lab or nation crosses a critical line in secret. These are not fantasies. They are the minimum rational responses to the situation we are in, and none of them are underway at anything close to the necessary scale.
The window to get this right is open. The technologies are still under human direction. The choices are still ours to make. But that window will not stay open forever, and every year of fragmented, short-sighted competition is a year we cannot get back.
The fulcrum is shifting. We can either balance the weight of what is coming together, or let it crush us separately. Hopefully, all will go well. But hope is not a strategy. Strategy means binding international compute thresholds, shared red-teaming protocols, and a standing crisis-response body, starting now, while the window is still open.