Anthropic, the State, and the Soul of AI
On Friday, February 27, 2026, President Donald Trump directed federal agencies and military contractors to discontinue using Anthropic’s AI systems. The Pentagon was given six months to phase out Claude. Defense Secretary Hegseth branded Anthropic a “supply chain risk” — a designation typically reserved for foreign adversaries. The company’s response was unequivocal:
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”
In that moment, what had begun as a procurement dispute became something considerably larger.
I have spent years mapping the terrain of AI governance — the race dynamic, the regulatory vacuum, the compounding risks of deploying systems whose power outpaces our institutional readiness to govern them. I am not a neutral observer. I am someone who has watched this landscape shift, studied its fault lines, and written extensively about where the fractures are most dangerous. I support the advancement of AI. I also believe, without reservation, that some lines must not be crossed — and that the willingness to hold those lines, at real cost, is one of the few things standing between responsible development and a future none of us should want. What happened on February 27 was a direct test of that principle, and Dario Amodei passed it.
The Tools of Coercion
To understand why this matters, consider the instruments the administration deployed. The Defense Production Act (DPA) is a Cold War statute designed to mobilize private industry in a national emergency, compelling manufacturers to produce the equipment, materials, or services the government deems critical to national defense. It has been invoked during genuine crises: war mobilization, pandemic response. But those precedents involved physical goods: steel, protective equipment, vaccines. Using the DPA to compel a software company to strip the ethical guidelines from its own AI model is entirely untested legal ground; there is no established precedent for the state commandeering not a factory’s output but a technology company’s conscience. This is state power turned not against a foreign threat or a market failure but against a principled refusal, and the courts will have to decide whether the law’s reach extends that far.
The “supply chain risk” designation compounds this. That label exists to protect national security from infiltration by adversarial actors — Chinese telecommunications firms, entities with ties to hostile governments. Applying it to Anthropic, an American company whose safety commitments are the very thing distinguishing it from less scrupulous competitors, inverts the logic the designation was designed to serve. Amodei himself identified the contradiction precisely: you cannot simultaneously label a company a security risk and invoke emergency law to compel its continued use as essential to national defense. The incoherence is not incidental. It is the signature of coercion masquerading as policy.
The Hollow Promise of “Lawful Use”
The Pentagon’s demand was framed as simple and reasonable: allow military use of Claude for “all lawful purposes.” That framing deserves scrutiny. Regulatory frameworks governing artificial intelligence in military and surveillance applications are thin, contested, and routinely outpaced by the technology itself. The word “lawful” does enormous rhetorical work in that framing while carrying almost no legal content. Laws governing autonomous weapons systems are embryonic. Legal standards for AI-assisted mass surveillance of American citizens are largely unwritten. Accepting “lawful use” as the operative boundary is not a safeguard; it is an agreement to be governed by a legal architecture that does not yet exist in any meaningful form.
The word “lawful” also does not mean the same thing in every context, and that slippage is precisely where the danger lives. What is lawful for a domestic law enforcement agency differs from what is lawful for a foreign intelligence operation, which differs again from what is lawful in an active combat theater, which differs still further from what is lawful when AI systems operate faster than any human can review their decisions. A blanket authorization to use Claude for “all lawful purposes” does not resolve those distinctions; it papers over them, leaving interpretation to whoever holds operational authority at the moment of use. That is not a legal standard. It is a blank check written in legal language. This matters especially now, when executive authority is being stretched, oversight mechanisms are being weakened, and the definition of “lawful” is increasingly shaped by the very actors who most want these tools without constraint. Handing over unrestricted access on that basis is not deference to the rule of law. It is an abdication of the responsibility to help build it.
The Loneliness of the Stand
There is something worth sitting with in the particular solitude of Amodei’s position. He held firm even as his own company, just days earlier, had rolled back its broader Responsible Scaling Policy under competitive pressure, an acknowledgment that the race dynamic had made unilateral safety commitments increasingly untenable. OpenAI and Google had already agreed to the Pentagon’s “lawful use” standard. The legal and regulatory frameworks that should have provided institutional backing for Amodei’s position did not exist. He was standing, in the end, on conscience alone.
That is both admirable and damning — admirable because it is rare, damning because it reveals how unprepared our institutions are for this moment. A person of principle should not have to stand alone against the full coercive apparatus of the executive branch to defend limits that a functional governance system would have codified long ago. That Amodei did so anyway, at the cost of a $200 million contract and the full weight of presidential opprobrium, is the kind of act that history sometimes vindicates long after the immediate crisis has passed.
History does offer instructive moments. When J. Robert Oppenheimer opposed the development of the hydrogen bomb on moral grounds in 1949, he was dismissed, marginalized, and ultimately stripped of his security clearance. The power structure of the day declared his objections obstructionist and his judgment suspect. Decades later, the questions he raised about the ethics of weapons development without commensurate wisdom to govern them remain among the most important questions humanity has ever faced. Whether Anthropic’s stand belongs in that company depends on what comes next — but the structural parallel is real.
A Coalition at the Fault Line
What made February 27 remarkable was not only Amodei’s refusal but the response it generated. Sam Altman, Amodei’s most direct competitor and the CEO of OpenAI, publicly sided with Anthropic, questioning the Pentagon’s approach and affirming that the AI field broadly shares the same red lines. Employees from OpenAI and Google signed an open letter stating that the Pentagon was now negotiating with their companies to obtain what Anthropic refused to provide. Bipartisan congressional voices raised concerns. Retired Air Force Lt. Gen. Jack Shanahan, who formerly led the Defense Department’s AI initiatives, wrote in a social media post, “Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
That coalition is meaningful in ways that deserve emphasis. The AI industry is defined by ferocious competition: for talent, for compute, for enterprise contracts, for the ear of policymakers. Anthropic and OpenAI are not allies; they are rivals in one of the most consequential commercial races in history. For Sam Altman to publicly defend Amodei’s stand, and for employees of OpenAI and Google to sign letters in solidarity with a competitor’s refusal, is not a routine industry response. It is something closer to a “conscience” moment: a recognition, cutting across competitive self-interest, that some principles matter more than market position. That kind of solidarity is rare precisely because the incentives run so strongly against it. It formed in real time, across the industry’s normal fault lines, around a principle rather than a constituency. Small voices aggregating at inflection points like this one have, historically, shaped the trajectory of debates that once seemed settled by power. I am one of those small voices. But I have watched this landscape long enough to recognize when a moment is genuinely pivotal, and this is one.
What Governance Must Do
The lesson of February 27 is not simply that Anthropic was right and the administration was wrong, though I believe that to be true. The deeper lesson is structural: we do not have the governance frameworks necessary to prevent this kind of confrontation from recurring, and to ensure that the next company facing similar pressure has institutional backstops rather than solitude.
Congress needs to define what constitutes permissible military AI — drawing a clear statutory line between advisory analytics, where machine assistance enhances human judgment, and autonomous weaponry, where it replaces it. The current ambiguity is not neutral; it systematically favors those who wish to expand AI’s operational role without accountability. Alongside that, an independent, bipartisan AI ethics body with genuine authority to review and certify defense AI systems would ensure that “lawful use” is measured against human rights standards rather than left to the contracting party’s interpretation. Companies that maintain robust, audited safety policies should receive meaningful safe harbor from compulsory use orders — not as a reward for compliance theater, but as an incentive structure that makes principled development economically sustainable. And any DPA order compelling AI deployment must be accompanied by full transparency reporting, so that oversight bodies and the public can see what is being done in their name.
These are not radical proposals. They are the minimum infrastructure of a governance system serious about the technology it is trying to manage. The fact that they do not yet exist is the governance gap I have been writing about for years. February 27 made that gap visible to everyone.
The Question That Remains
Dario Amodei stood on conscience when the institutional architecture that should have supported him was absent. That is the condition we are in: a world where the most consequential technology in human history is being governed, to the extent it is governed at all, by the willingness of individuals to hold lines that power finds inconvenient.
I am a proponent of progress in peace. I believe AI can be one of the great forces for human flourishing in this century, and I have devoted serious analytical effort to thinking about how that becomes possible rather than merely aspirational. But progress built on the abandonment of principled limits is not progress. It is acceleration without direction, and the history of such accelerations is not encouraging.
The legal challenge ahead will be worth watching closely. If Anthropic prevails, the ruling could establish a precedent protecting corporate safety commitments from unilateral executive override — a modest but meaningful institutional anchor in a landscape that badly needs them. If the administration prevails, it will have established that principled limits on frontier AI are negotiable whenever national security is invoked, a claim with no logical stopping point.
The right, in this instance, is holding against the might. Whether it holds long enough to matter is the question of our moment.