Israel’s AI Revolution – Part 3 – Killer Robots and the Race to Regulate: Why the World Can’t Agree on Autonomous Weapons

Autonomous weapons are reshaping warfare faster than diplomats can respond—here’s why a global ban remains elusive as of late March 2026.

Introduction

In March 2020, a Kargu-2 drone deployed by Libyan government forces hunted down and attacked a human target without any human operator pulling the trigger. A subsequent UN Panel of Experts report to the Security Council described the episode as possibly the first recorded instance of a lethal autonomous weapon engaging a human target in combat. Produced by STM, a Turkish defense contractor, the Kargu-2 reportedly used artificial intelligence and facial recognition to select and engage its target—a stark demonstration of how rapidly autonomous weapons were transitioning from science fiction to reality.

Fast forward to today, and autonomous weapons are no longer a futuristic scenario. Israel’s AI targeting systems in Gaza, U.S. drone strikes in the Middle East, and China’s AI-driven surveillance in Xinjiang are just a few examples of how machines are already reshaping modern warfare. Yet despite their rapid advancement, the world remains deeply divided over how—or even whether—to regulate these technologies. Proponents argue that these systems can reduce risks to soldiers, improve precision, and provide strategic advantages. Critics warn of accountability gaps, ethical nightmares, and a new arms race that could spiral out of control. With no binding international treaty in sight, the question looms: Is a global ban on autonomous weapons possible, or is the genie already out of the bottle?

In the two years since this article was first drafted, the urgency has only grown. The United Nations has missed its own deadline for a treaty, major powers have accelerated their development of AI-driven warfare, and civil society warnings have reached new heights. In early 2026 alone, the U.S. government tried to coerce an AI company into removing safety guardrails and was blocked by a federal judge; Ukraine became the first nation to open real combat data to international partners for training autonomous systems; and diplomatic support for a binding instrument grew from roughly 40 states to over 70 in a single week of negotiations. The question is no longer whether autonomous weapons will reshape warfare—they already have. The question is whether any institution, anywhere, will establish meaningful rules before the window closes. This account reflects the state of play as of late March 2026.

Section 1: What Are Autonomous Weapons?

Autonomous weapons are systems capable of selecting and engaging targets without human intervention. The degree of autonomy varies, ranging from semi-autonomous systems, where humans remain “in the loop” to make final decisions, to fully autonomous systems, where machines operate independently. Semi-autonomous systems, such as the U.S. Reaper drone or Israel’s Lavender and Gospel AI targeting tools, require human approval for lethal action but rely on algorithms to identify and prioritize targets. Fully autonomous systems, like the Kargu-2 drone used in Libya or Israel’s Harpy loitering munition (which autonomously homes in on enemy radar emissions), can select and engage targets without further human input. Meanwhile, AI-driven surveillance systems, such as China’s facial recognition technology in Xinjiang or Israel’s surveillance infrastructure in Gaza and the West Bank, identify and track targets but still require human authorization for lethal force.
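To make these distinctions concrete, the sketch below encodes the autonomy spectrum as a simple decision gate, including the intermediate “human on the loop” tier often discussed in this debate, where a machine acts unless a human vetoes in time. It is purely illustrative: the names and logic are hypothetical and correspond to no real weapon system.

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1   # machine proposes; a human must approve each engagement
    HUMAN_ON_THE_LOOP = 2   # machine acts by default; a human can only veto in time
    FULLY_AUTONOMOUS = 3    # machine selects and engages with no human input at all

def may_engage(level: AutonomyLevel, human_approved: bool, human_vetoed: bool) -> bool:
    """Illustrative gate: decide whether an engagement may proceed.

    Real systems involve sensing, classification, rules of engagement,
    and legal review far beyond this sketch.
    """
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_approved      # nothing fires without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not human_vetoed    # proceeds unless a human intervenes
    return True                    # fully autonomous: no human gate remains
```

The middle case is where most definitional fights play out: an “on the loop” system acts by default, so whether human control is meaningful depends entirely on whether a human can actually assess and veto the engagement in the seconds available.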

The use of these systems is already widespread. Israel’s Lavender and The Gospel tools, for example, generate hundreds of targets per day in Gaza, far outpacing the capacity of human analysts. According to reports from +972 Magazine and Human Rights Watch, The Gospel can produce over 100 bombing targets daily, while human analysts might generate only 50 in a year. These tools have been linked to civilian harm, with reports of misidentifications and unintended casualties. Similarly, the U.S. has conducted thousands of drone strikes in Pakistan, Yemen, and Somalia since 2001, often relying on “signature strikes” that target individuals based on patterns of behavior rather than confirmed identities. This approach has resulted in high civilian casualties, as documented by The Bureau of Investigative Journalism. China, meanwhile, has deployed AI-driven surveillance systems in Xinjiang, including facial recognition and predictive policing tools capable of monitoring populations at scale. Finally, Russia and Ukraine have both deployed autonomous drones in their ongoing war, increasingly relying on AI-driven systems for reconnaissance, targeting, and strike operations.

Section 2: The Divide—Arguments For and Against Autonomous Weapons

Proponents of autonomous weapons, primarily militaries and defense contractors, argue that these systems offer significant strategic and humanitarian benefits. The broader argument is that precision-guided autonomous systems, by narrowing the scope of engagement, could reduce the indiscriminate destruction associated with conventional bombardment. Additionally, autonomous systems can keep soldiers out of harm’s way by replacing them in dangerous missions, as demonstrated by the Kargu-2 drone in Libya, which allowed forces to engage enemies without exposing their own personnel. Proponents also assert that nations failing to develop autonomous weapons risk falling behind adversaries, pointing to the ongoing arms race among the U.S., China, and Russia to dominate AI-driven warfare.

Critics, however, including human rights groups, ethicists, and some governments, highlight significant ethical, legal, and strategic risks. One of the most pressing concerns is the issue of accountability. If an autonomous weapon kills civilians, it is unclear who bears responsibility: the programmer, the military commander, or the machine itself. This problem is exemplified by Israel’s Lavender system. According to reports from +972 Magazine, Lavender had a roughly 10% error rate in identifying targets; applied to the more than 100 machine-generated targets reportedly produced each day, an error rate of that magnitude implies on the order of ten misidentifications a day, or dozens every week. Human analysts reviewing Lavender’s recommendations reportedly spent as little as 20 seconds per target before approving strikes. Yet no one has been held accountable for the resulting misidentifications and civilian deaths. Ethical concerns also abound, as machines lack moral reasoning and empathy. The history of drone warfare is littered with catastrophic errors when targeting decisions rely on algorithmic pattern-matching rather than confirmed intelligence—a record examined more fully in Section 4. Critics further warn that the proliferation of autonomous weapons could destabilize global security, as dozens of countries, including North Korea and Iran, are now developing such systems. Finally, there is the risk that autonomous weapons could lower the threshold for war by making conflict seem less costly, both politically and in terms of human life.
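As a back-of-the-envelope check on those reported figures (taking the contested numbers at face value, purely to show the scale they imply):

```python
# Back-of-the-envelope arithmetic using the figures reported above.
# Both inputs are contested; this simply shows what they imply if accurate.
targets_per_day = 100   # reported machine-generated bombing targets per day
error_rate = 0.10       # Lavender's reported misidentification rate

misidentified_per_day = targets_per_day * error_rate
misidentified_per_week = misidentified_per_day * 7

print(f"~{misidentified_per_day:.0f} misidentified targets per day")
print(f"~{misidentified_per_week:.0f} misidentified targets per week")
# -> roughly 10 per day and 70 per week: each one a potential strike
#    on the wrong person, reviewed for as little as 20 seconds.
```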

But recent combat experience suggests the picture is more complicated than either side acknowledges.

The Red Sea “Human Factor” Paradox

While much of the debate focuses on machines replacing humans, recent combat experience has revealed an unexpected twist. In March 2026, French Navy Chief of Staff Admiral Nicolas Vaujour explained that the intensity of drone and missile threats in the Red Sea has forced his service to increase crew sizes on frigates. Western navies had spent years trying to reduce crews through automation, but the reality of high-intensity conflict proved that more humans were needed to maintain resilience and manage the overwhelming volume of data generated by AI-enabled sensors. Admiral Vaujour noted that the ships were found to be “light in terms of human resource capacity” when dealing with such complex, multi-domain threats. His comments underscore a critical nuance: even as autonomous systems proliferate, the human role in war may not simply shrink—it may shift and, in some contexts, grow.

Section 3: Global Movements—The Fight to Regulate Autonomous Weapons

Efforts to regulate autonomous weapons have gained traction through initiatives like the Campaign to Stop Killer Robots, a coalition of over 270 non-governmental organizations across 70 countries. This group advocates for a preemptive ban on fully autonomous weapons, arguing that machines should never make life-and-death decisions, that autonomous weapons violate international law, and that delegating killing to machines dehumanizes warfare. While the campaign has achieved some successes—such as gaining support from over 30 countries including Austria, Brazil, and New Zealand, and prompting the European Parliament to call for a moratorium on lethal autonomous weapons—it faces significant obstacles. Major powers like the U.S., China, Russia, Israel, and the UK oppose a ban, citing strategic interests, and lobbying by defense contractors such as Lockheed Martin and Northrop Grumman further complicates progress.

The United Nations has held annual discussions on Lethal Autonomous Weapons Systems (LAWS) since 2014 through the Convention on Certain Conventional Weapons (CCW), but progress has been slow due to geopolitical rivalries and a lack of consensus. Key debates center on whether there should be a total ban or merely regulations requiring human oversight, and on how to define “meaningful human control.” A 2023 UN report called for urgent action but stopped short of recommending a comprehensive ban, and earlier GGE meetings failed to reach consensus even on basic definitions.

Recent Developments (2025–2026):

In early 2026, the diplomatic landscape grew even more fragmented. The UN Secretary-General had set a target of reaching a legally binding instrument by 2026, but that deadline has been missed. During the first 2026 session of the Group of Governmental Experts (GGE) on LAWS (March 2–6 in Geneva), delegations conducted a first reading of the Chair’s rolling draft text of elements for a possible instrument. Support for using the text as the basis for formal negotiations grew significantly—from over 40 states at the start of the week to over 70 by the end, including a group of African states. A revised version of the text was circulated mid-session, and discussions continued on the text’s key sections, or “boxes,” covering prohibitions, regulations, and human responsibility. Another GGE session is scheduled for August 31–September 4, 2026, ahead of the CCW Seventh Review Conference in November 2026, where states could decide on launching negotiations for a binding protocol—or see momentum shift toward a potential “breakaway treaty” outside the consensus-bound CCW framework, similar to the 1997 Ottawa landmine ban. A November 2025 UNGA resolution supported by 156 states already urged completing the elements “with a view to future negotiation,” reflecting frustration with slow progress. Notably, China abstained on that resolution, while the United States and Russia voted against it—a distinction worth noting, as China’s position, while obstructive in practice, is more nuanced than outright opposition.

Meanwhile, a new geopolitical dynamic emerged at the REAIM 2026 Summit (Responsible AI in the Military Domain), held in A Coruña, Spain, on February 4–5, 2026. Eighty-five states attended, but only about 35–40 endorsed the final “Pathways to Action” outcome document—down from over 60 at the 2024 summit. Notably, neither China nor the United States endorsed it, marking the first REAIM outcome without consistent superpower support. The document focused on pragmatic implementation steps such as testing, evaluation, procurement guidelines, and human responsibility, rather than new binding rules. The Netherlands and South Korea, as middle-power initiators, continued to sustain the platform, but observers noted that the U.S. and China have become increasingly aloof in international treaty talks. Dutch Defense Minister Kajsa Ollongren framed the dilemma: governments are trapped between wanting responsible limits and not wanting to be constrained in competition with rivals like Russia and China. The future of REAIM is uncertain; analysts suggest any fourth summit will need to move beyond outcome documents to concrete implementation.

The Anthropic–Pentagon Showdown

A dramatic clash between an AI company and the U.S. government in early 2026 added a new dimension to the governance debate—and, as of this writing, has produced the first judicial ruling on the constitutional limits of the executive’s power to coerce AI companies on safety policy.

In September 2025, Anthropic—the first AI lab to deploy its technology across the Pentagon’s classified networks, under a $200 million contract signed that July—began negotiating the deployment of its Claude AI model on the Department of War’s GenAI.mil platform. Talks stalled over two red lines Anthropic refused to cross: it would not allow Claude to be used for mass surveillance of U.S. citizens or for fully autonomous lethal weapons. The Pentagon insisted on unfettered access for “all lawful purposes.”

When negotiations failed, the confrontation escalated rapidly. On February 25, Anthropic CEO Dario Amodei wrote publicly that he “cannot in good conscience accede” to the Pentagon’s demands, arguing that autonomous weapons using current AI are “simply not reliable enough” and that the Pentagon had refused his offer to collaborate on improving reliability. Defense Secretary Pete Hegseth threatened to designate Anthropic a “supply chain risk”—a label previously reserved for companies connected to foreign adversaries—and prepared to invoke the Defense Production Act to compel compliance. On February 26, President Trump announced on Truth Social that the government would instead work with OpenAI, ordering federal agencies to cease using Anthropic’s technology and describing the company as “far-Left.”

On March 4, Hegseth formally designated Anthropic a supply chain risk, requiring defense contractors—including Amazon, Microsoft, and Palantir—to certify they did not use Claude in their work with the military. Anthropic sued the Trump administration on March 8, filing in both the Northern District of California and the D.C. Circuit Court of Appeals, arguing the designation punished the company for exercising its First Amendment rights by raising safety concerns.

The case drew unusual support. Microsoft and retired U.S. military leaders filed briefs backing Anthropic. Dozens of researchers at OpenAI and Google DeepMind—Anthropic’s direct competitors—filed an amicus brief in their personal capacities, arguing that the designation could harm U.S. competitiveness and chill public discussion of AI risks. A group of Catholic theologians also filed in support.

At the March 23 hearing, U.S. District Judge Rita Lin pressed the government on its rationale. The Pentagon’s lawyer argued that Anthropic was not simply refusing contract terms but “raising concerns to the Department of War about how the Department of War uses its technology in military missions”—framing the act of raising safety concerns as itself a form of threat. The government claimed it had “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems.”

On March 26, Judge Lin issued a preliminary injunction blocking both the supply chain risk designation and Trump’s directive banning federal agencies from using Claude. In a 43-page ruling, she called the measures “likely both contrary to law and arbitrary and capricious” and wrote that nothing in the governing statute supports “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” She found the designation likely violated Anthropic’s First Amendment and due process rights, describing the government’s actions as “classic First Amendment retaliation” that appeared “designed to punish Anthropic” rather than address any genuine security concern. The injunction was stayed for one week to give the government time to appeal.

The standoff prompted congressional action. On March 17, Senator Elissa Slotkin introduced the AI Guardrails Act, which would legally prohibit the Pentagon from using autonomous weapons to kill without human authorization, ban AI-driven mass surveillance of Americans, and ban AI-controlled nuclear launches (see Section 5).

Section 4: Case Study—Autonomous Weapons in Action

The real-world application of autonomous weapons is already evident in several conflicts. In Gaza, Israel’s use of AI systems like Lavender and The Gospel has raised concerns about civilian casualties and lack of transparency. Reports from Human Rights Watch and +972 Magazine document families killed in strikes on residential buildings, while Israel’s classification of most details about its AI systems makes independent oversight nearly impossible. What reporting has emerged paints a troubling picture. According to +972 Magazine, Lavender was used to identify targets who were then tracked and struck in their homes during nighttime hours—when their families were likely to be present. The system’s scale and speed fundamentally altered the relationship between intelligence and action: rather than analysts building cases against individual targets over weeks or months, an algorithm generated lists of names that human operators approved in seconds. The result, according to Human Rights Watch, was a pattern of strikes on residential buildings that killed entire families alongside their intended targets—if, in fact, the intended targets were correctly identified at all. Ethical questions persist about whether algorithm-driven warfare can comply with International Humanitarian Law.

The U.S. provides another critical case study through its use of semi-autonomous drones in the Middle East. Predator and Reaper drones, which require human approval for strikes, have been used in thousands of operations in Pakistan, Yemen, and Somalia. However, the reliance on “signature strikes”—targeting individuals based on patterns of behavior rather than confirmed identities—has led to high civilian casualties, as documented by The Bureau of Investigative Journalism. The consequences of this approach were starkly illustrated in August 2021, when a U.S. drone strike in Kabul killed 10 civilians, including seven children, after misidentifying an aid worker’s vehicle as a threat. The strike, later investigated by The New York Times, revealed a chain of mistaken assumptions and confirmation bias that compounded into a catastrophic error. The U.S. military initially defended the strike before acknowledging the mistake, and no one was disciplined. The lack of accountability is a persistent issue, with the U.S. rarely investigating or acknowledging civilian harm.

China’s use of AI-driven surveillance in Xinjiang offers a different but equally concerning example. Facial recognition, predictive policing, and autonomous drones enable mass surveillance and repression of Uyghur Muslims, facilitating mass detentions and forced labor. China has also exported this surveillance technology to authoritarian regimes in Myanmar, Zimbabwe, and Venezuela—raising the prospect of a global proliferation of AI-enabled repression beyond the battlefield.

Ukraine’s Battlefield Data Initiative

In a major development this month, Ukraine became the first nation to open its real combat data to international partners and defense companies. On March 12, 2026, the Ukrainian government announced an initiative granting access to millions of annotated frames from over five million drone missions, continuously updated as the war enters its fifth year. Partners can train AI models on this data without accessing Ukraine’s sensitive DELTA battlefield management system, using a dedicated platform operated by the Ministry of Defense’s Center for Innovation and Development of Defense Technologies.

Ukrainian Defense Minister Mykhailo Fedorov explained the rationale: “The future of warfare belongs to autonomous systems. Our objective is to increase the level of autonomy in drones so they can detect targets faster, analyze battlefield conditions, and support real-time decision-making.” Deputy Minister Yuriy Myronenko added, “You can control only with data. Otherwise, I don’t even know how you can control such a number of drones, people, front lines, and such a number of resources. And so fast.” Under the framework, partners can train their own AI models and codevelop new technological solutions using the data, but cannot take direct possession of the raw video—a constraint that reflects Ukraine’s attempt to maintain some control over how the material is used even as it opens unprecedented access. The initiative effectively turns Ukraine into a global hub for AI warfare innovation, accelerating the development of autonomous systems far faster than laboratory simulations could.
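Ukraine has not published the platform’s technical design, but the “train on the data, never take the data” constraint suggests an enclave-style arrangement in which computation moves to the data and only derived artifacts leave. The sketch below is a hypothetical illustration of that pattern; every name in it is invented.

```python
# Hypothetical sketch of a "train on the data, never export the data" platform.
# All names are invented for illustration; Ukraine has not published this API.
from dataclasses import dataclass

@dataclass
class TrainingJob:
    partner_id: str
    model_spec: str      # partner-supplied architecture/config reference
    dataset_slice: str   # which collection of annotated frames to train on

class SecureTrainingEnclave:
    """Runs partner training jobs inside the data holder's infrastructure."""

    def submit(self, job: TrainingJob) -> str:
        # Training executes next to the data; only a job handle leaves.
        return f"job-{job.partner_id}-0001"

    def fetch_artifacts(self, job_handle: str) -> dict:
        # Partners receive trained weights and metrics, never raw video frames.
        return {"weights": b"...", "metrics": {"mAP": 0.0}}

    def export_raw_frames(self, job_handle: str) -> None:
        raise PermissionError("raw battlefield video may not leave the enclave")
```

Whatever the actual implementation, the reported constraint serves the same purpose as this sketch’s final method: partners gain the value of the data, in the form of trained models, without Ukraine surrendering custody of the underlying footage.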

The Russia–Ukraine war also highlights escalation risks, with both sides using autonomous drones for reconnaissance and strikes. Ukraine’s DELTA software employs AI to predict and target Russian positions, raising questions about whether autonomous systems lower the threshold for lethal force and increase the potential for unintended escalation.

Section 5: The Reader Question—Is a Global Ban Possible?

The question of whether a global ban on autonomous weapons is achievable remains contentious. History offers some cause for optimism, as past bans on anti-personnel landmines, chemical weapons, and nuclear testing demonstrate that international agreements are possible when civil society, governments, and experts align. The Ottawa Treaty of 1997, which banned anti-personnel landmines and now counts 164 states parties, and the Chemical Weapons Convention of 1993, which outlawed chemical weapons and has 193 states parties, serve as precedents. A similar movement could emerge for autonomous weapons if public pressure grows, middle-power countries take the lead, and the strategic case for restraint becomes clear.

However, autonomous weapons present unique challenges that make a ban difficult to achieve. Major powers such as the U.S., China, Russia, and Israel see strategic value in AI-driven warfare and are unlikely to support a treaty that limits their capabilities. Additionally, the dual-use nature of many autonomous systems—such as drones and AI targeting tools, which have civilian applications—complicates enforcement. Lobbying by defense contractors, which profit from autonomous weapons, adds a further layer of resistance to regulation.

New Obstacles and Shifting Dynamics (2025–2026):

In January 2026, the Department of War (a secondary title adopted by executive order in September 2025; Congress has not formally renamed the Department of Defense) issued an AI Acceleration Strategy aimed at establishing “America’s Military AI Dominance.” The strategy announced seven Pace-Setting Projects designed to field autonomous systems faster than any previous Pentagon initiative. Among the most revealing: Ender’s Foundry, named after the science fiction novel in which children unknowingly fight real wars through simulations, would build AI-enabled simulation capabilities with feedback loops between virtual and operational environments to “outpace adversaries.” Open Arsenal would accelerate the pipeline from technical intelligence to executable weapons code—compressing what once took years into hours. Agent Network would deploy AI agents across the full spectrum of battle management, from campaign planning to kill chain execution. And GenAI.mil would give the entire department access to frontier models from Google and xAI.

Taken together, the Pace-Setting Projects represent something more than incremental modernization. They are a formal policy commitment to embed AI decision-making across the full arc of warfare—from identifying threats to executing strikes—at a pace that outstrips any existing regulatory framework, domestic or international.

This acceleration triggered a domestic counter-move. In March 2026, following the Anthropic–Pentagon standoff, Senator Elissa Slotkin introduced the AI Guardrails Act, which would legally prohibit the Pentagon from using autonomous weapons to kill without human authorization and would ban AI-controlled nuclear launches. Representative Sam Liccardo introduced a separate Defense Production Act amendment to prohibit federal agencies from retaliating against technology vendors that limit deployment to mitigate risk, though it failed on a party-line vote. Senator Adam Schiff signaled he would introduce additional legislation and was considering inclusion in the National Defense Authorization Act. The bills set up a direct legislative-executive conflict over how—and whether—to constrain autonomous weapons.

If a total ban proves unattainable, strong regulations could offer a middle ground. The GGE’s rolling draft text, with its specific elements covering prohibitions, regulations, and human responsibility, represents the most concrete framework currently on the table. The growth from 40 to over 70 states backing this text as a basis for negotiations during the March 2026 session suggests that the contours of a possible instrument are becoming clearer, even as the political will of major powers remains uncertain. Yet even incremental measures—mandatory human oversight for all lethal decisions, transparency requirements such as public reporting on AI use in conflicts, and independent audits of military AI systems—face challenges, including enforcement difficulties and the potential for loopholes that allow countries to circumvent the rules.
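What might the transparency and audit elements look like in practice? One hypothetical building block is a tamper-evident log of engagement decisions; the sketch below (illustrative only, modeled on no particular system) shows the kind of record an independent auditor would need.

```python
# Hypothetical sketch: a tamper-evident engagement record of the sort a
# "human oversight plus independent audit" regime might mandate.
import hashlib
import json
import time

def record_engagement(prev_hash: str, target_id: str, recommended_by: str,
                      approved_by: str, review_seconds: float) -> dict:
    """Append one engagement decision to a hash-chained audit log."""
    entry = {
        "timestamp": time.time(),
        "target_id": target_id,
        "recommended_by": recommended_by,  # which model/version proposed the target
        "approved_by": approved_by,        # the accountable human, by name or role
        "review_seconds": review_seconds,  # how long the human actually reviewed it
        "prev_hash": prev_hash,            # chaining makes retroactive edits visible
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry
```

A field like review_seconds matters because reporting on Lavender describes approvals taking as little as 20 seconds; without recording it, no auditor can assess whether human oversight was meaningful rather than nominal.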

Conclusion: The Clock Is Ticking

The debate over autonomous weapons is no longer theoretical. Systems like Lavender, Reaper drones, and the Kargu-2 are already in use, reshaping warfare in ways that outpace the laws meant to govern them. The world now stands at a crossroads. Will autonomous weapons be allowed to proliferate, risking a future where machines decide who lives and dies? Or can meaningful rules be established to ensure accountability, transparency, and ethical use?

The Campaign to Stop Killer Robots, incremental progress in the CCW GGE (including growing support for negotiations), UN discussions, and public pressure offer glimmers of hope. The upcoming August–September GGE session and November 2026 CCW Review Conference represent critical windows for deciding whether to launch formal talks on a binding instrument—or risk further fragmentation. Yet without urgent action, the risks—unaccountable killings, global arms races, and eroded human dignity—will only grow. In January 2026, the Bulletin of the Atomic Scientists set the Doomsday Clock to 85 seconds to midnight—the closest it has ever been—explicitly citing military applications of AI and autonomous systems among the disruptive technologies driving global instability. The International Committee of the Red Cross has urged states to conclude a legally binding treaty by 2026, warning that “machines with the power and discretion to take lives without human involvement should be prohibited by international law”—a call first issued jointly with the UN Secretary-General in 2023 that remains unmet.

Major powers are racing ahead with development and deployment, even as middle powers and civil society push back. The stakes could not be higher, and the time to act is now.

In the span of a single month, a federal judge ruled that the U.S. government cannot coerce an AI company into removing safety guardrails. A senator introduced legislation to ban autonomous kill decisions without human authorization. Seventy nations backed the framework for a binding treaty. And Ukraine opened the most comprehensive real-world combat dataset in history to train the next generation of autonomous weapons. These are not contradictions. They are the simultaneous, competing impulses of a world that knows it needs rules but cannot stop building the things those rules are meant to govern.


Sources Cited

1. +972 Magazine. (2024, April 3). ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. Retrieved from https://www.972mag.com/lavender-ai-israeli-army-gaza/

2. Human Rights Watch. (2024, September 10). Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. Retrieved from https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza

3. Lieber Institute (West Point). (2024, September 6). The Gospel, Lavender, and the Law of Armed Conflict. Retrieved from https://lieber.westpoint.edu/gospel-lavender-law-armed-conflict/

4. Foreign Policy. (2024, May 4). Israel’s Algorithmic Killing of Palestinians Sets Dangerous Precedent. Retrieved from https://foreignpolicy.com/2024/05/02/israel-military-artificial-intelligence-targeting-hamas-gaza-deaths-lavender/

5. Campaign to Stop Killer Robots. About Autonomous Weapons. Retrieved from https://www.stopkillerrobots.org/ Note: The Campaign to Stop Killer Robots is an international coalition of over 270 NGOs operating in over 70 countries.

6. United Nations. Lethal Autonomous Weapons Systems (LAWS). Retrieved from https://www.un.org/disarmament/topics/autonomousweapons/

7. The Bureau of Investigative Journalism. Drone Warfare. Retrieved from https://www.thebureauinvestigates.com/

8. Human Rights Watch. China: AI-Powered Repression in Xinjiang. Retrieved from https://www.hrw.org/asia/china-and-tibet

9. The New York Times. (2021, September 10). Times Investigation: In U.S. Drone Strike, Only the Children Died. Retrieved from https://www.nytimes.com/2021/09/10/world/asia/us-drone-strike-afghanistan.html

10. UN Panel of Experts on Libya. (2021). Final report to the UN Security Council (S/2021/229). Retrieved from https://undocs.org/S/2021/229

11. U.S. Department of Defense. (2023). Policy on Autonomous Weapons. Retrieved from https://www.defense.gov/

12. European Parliament. (2023). Resolution on Lethal Autonomous Weapons. Retrieved from https://www.europarl.europa.eu/

13. Naval News. (2026, March 16). Red Sea Crisis Reinforced Need to be Ready at Sea, say Western Navy Chiefs. Retrieved from https://www.navalnews.com/naval-news/2026/03/red-sea-crisis-reinforced-need-to-be-ready-at-sea-say-western-navy-chiefs/ Source note: French Navy Chief of Staff Admiral Nicolas Vaujour explained that the intensity of drone and missile threats in the Red Sea forced an increase in crew sizes on frigates, as ships were found to be “light in terms of human resource capacity” when dealing with complex, multi-domain threats.

14. REAIM 2026 Summit. (2026, February 4–5). Chair’s Summary: Responsible AI in the Military Domain. A Coruña, Spain. Retrieved from https://www.government.nl/events/reaim-2026 Source note: Eighty-five states attended, but only about 35–40 endorsed the final outcome document—down from over 60 in 2024. Neither China nor the United States endorsed, marking the first REAIM outcome without superpower support.

15. Associated Press. (2026). Anthropic–Pentagon Dispute Coverage. The following articles document the full timeline of the dispute:

• Baldor, L. C., & O’Brien, M. (2026, February 23). Hegseth warns Anthropic to let the military use the company’s AI tech as it sees fit, AP sources say. Retrieved from https://apnews.com/article/anthropic-hegseth-ai-pentagon-military-3d86c9296fe953ec0591fcde6a613aba

• O’Brien, M., & Baldor, L. C. (2026, February 25). Anthropic CEO says it ‘cannot in good conscience accede’ to Pentagon’s demands for AI use. Retrieved from https://apnews.com/article/anthropic-ai-pentagon-hegseth-dario-amodei-9b28dda41bdb52b6a378fa9fc80b8fda

• Grantham-Philips, W. (2026, February 25). What to know about Defense Production Act and the Pentagon’s Anthropic ultimatum. Retrieved from https://apnews.com/article/anthropic-military-ai-hegseth-department-of-defense-f05674f7195051ab843e5087d12c8cf8

• Baldor, L. C., & O’Brien, M. (2026, February 26). Trump orders US agencies to stop using Anthropic technology in clash over AI safety. Retrieved from https://apnews.com/article/anthropic-pentagon-ai-hegseth-dario-amodei-b72d1894bc842d9acf026df3867bee8a

• Baldor, L. C., & O’Brien, M. (2026, March 4). Pentagon says it is labeling AI company Anthropic a supply chain risk ‘effective immediately’. Retrieved from https://apnews.com/article/pentagon-ai-anthropic-claude-dario-amodei-openai-d4608c7dd139245ac8ad94d5427c505a

• O’Brien, M., & Baldor, L. C. (2026, March 8). AI company Anthropic sues Trump administration seeking to undo ‘supply chain risk’ designation. Retrieved from https://apnews.com/article/anthropic-trump-pentagon-hegseth-ai-104c6c39306f1adeea3b637d2c1c601b

• O’Brien, M., & Baldor, L. C. (2026, March 10). Microsoft and retired military chiefs back AI company Anthropic in court fight against Pentagon. Retrieved from https://apnews.com/article/trump-anthropic-ai-microsoft-pentagon-c4210e7eddd9ad90161e7fa2da9736e2

• O’Brien, M. (2026, March 23). Anthropic and Pentagon head to court as AI firm seeks end to ‘stigmatizing’ supply chain risk label. Retrieved from https://apnews.com/article/anthropic-pentagon-supply-chain-risk-1c8955eccab9f6f40de5f9897118ac32

• O’Brien, M., & Liedtke, M. (2026, March 26). Federal judge temporarily blocks the Pentagon from branding AI firm Anthropic a supply chain risk. Retrieved from the AP via multiple outlets. Key details: Anthropic CEO Dario Amodei refused Pentagon demands to remove safeguards against mass surveillance of U.S. citizens and fully autonomous lethal weapons, stating that autonomous weapons using current AI are “simply not reliable enough.” Defense Secretary Pete Hegseth designated the company a “supply chain risk.” President Trump ordered federal agencies to phase out Anthropic’s technology. On March 26, U.S. District Judge Rita Lin issued a preliminary injunction blocking the designation and Trump’s directive, calling the measures “classic First Amendment retaliation.”

16. Ministry of Defence of Ukraine. (2026, March 12). Ukraine Launches Battlefield Data Initiative for International Partners. Retrieved from https://mod.gov.ua/en/news/ukraine-is-the-first-country-in-the-world-to-open-real-battlefield-data-to-partners-for-ai-model-training Alternate source: Reuters. (2026, March 12). Ukraine opens battlefield data to international partners to train AI for drones. Retrieved from https://www.reuters.com/world/europe/ukraine-opens-battlefield-data-international-partners-train-ai-drones-2026-03-12/ Source note: Ukraine became the first nation to open real combat data—millions of annotated frames from over five million drone missions—to international partners and defense companies. Defense Minister Mykhailo Fedorov stated: “The future of warfare belongs to autonomous systems.” Partners can train AI models on the data but cannot take direct possession of the raw video.

17. U.S. Department of War. (2026, January 9–12). Artificial Intelligence Strategy for the Department of War [Memorandum and press release]. The strategy memorandum was issued January 9, 2026; Secretary Hegseth’s public speech followed on January 12, 2026. Full strategy document available at: https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIGENCE-STRATEGY-FOR-THE-DEPARTMENT-OF-WAR.PDF Press release: https://www.war.gov/News/Releases/Release/Article/4376420/war-department-launches-ai-acceleration-strategy-to-secure-american-military-ai/ Source note: The Department of War issued an AI Acceleration Strategy establishing “America’s Military AI Dominance” through seven Pace-Setting Projects (PSPs), each with a single accountable leader and aggressive timelines. Warfighting PSPs include Swarm Forge (pairing elite warfighting units with technology innovators), Agent Network (AI agents for battle management, from campaign planning to kill chain execution), and Ender’s Foundry (AI-enabled simulation with sim-dev and sim-ops feedback loops). Intelligence PSPs include Open Arsenal (compressing the intelligence-to-weapons-code pipeline from years to hours) and Project Grant (transforming deterrence from static postures to dynamic pressure). Enterprise PSPs include GenAI.mil (department-wide access to frontier models) and Enterprise Agents (rapid AI agent development for enterprise workflows).

18. Slotkin, E. (2026, March 17). AI Guardrails Act of 2026. 119th Cong. Press release: Slotkin, E. (2026, March 17). Slotkin Introduces AI Guardrails Act to Prevent Autonomous Lethal Weapons. Retrieved from https://www.slotkin.senate.gov/2026/03/17/slotkin-legislation-puts-common-sense-guardrails-on-dod-ai-use-around-lethal-force-spying-on-americans-and-nuclear-weapons/ Source note: Following the Anthropic–Pentagon dispute, Senator Elissa Slotkin introduced legislation to legally prohibit the Pentagon from using autonomous weapons to kill without human authorization, ban AI-driven mass surveillance of Americans, and ban AI-controlled nuclear launches.

19. Reaching Critical Will (RCW) / WILPF CCW Report, Vol. 14, No. 2 (published March 10–11, 2026): Detailed coverage of the March 2–6, 2026 GGE session in Geneva. Notes that support for using the Chair’s rolling draft text as the basis for formal negotiations grew from over 40 states at the start of the week to over 70 by the end (including additional African states). Discusses the first reading of the rolling text (divided into “boxes”), the Chair’s revised text circulated mid-session, and the context of the three-year mandate ending with the November 2026 CCW Review Conference.

20. Stop Killer Robots Advocacy Sheet and related updates (March 2026): Provides civil society perspective on the session, including calls for stronger prohibitions and human control. Also tracks the September 2025 joint statement (initially ~42 states) that fed into the 2026 momentum.

21. UNODA / CCW official documents (January–March 2026): Includes the aide-mémoire for the first 2026 session, provisional agenda, and references to the rolling text (December 2025 version and subsequent revisions). Chair is confirmed as H.E. Robert in den Bosch (Netherlands). The second session is scheduled for August 31–September 4, 2026.

22. Reuters coverage (March 3, 2026): Quotes Chair Robert in den Bosch on the urgency of progress, the realistic miss of the broader 2026 binding instrument target, and the potential for the November 2026 CCW Review Conference to decide on launching negotiations for a binding protocol.

23. Official REAIM 2026 Pathways to Action document (A Coruña Declaration, February 2026): The outcome document itself, endorsed by approximately 35–40+ states. It emphasizes pragmatic steps (testing, evaluation, procurement, human responsibility) rather than binding rules. Neither the U.S. nor China endorsed it.

24. Spanish Ministry of Foreign Affairs / REAIM site and related coverage: Confirms attendance (~85 states) and the drop in endorsements compared to prior years, highlighting middle-power leadership (Netherlands, South Korea, Spain) amid superpower aloofness.

25. November 2025 UNGA Resolution (A/RES/80/57 or L.41): Adopted with 156 states in favor. Urges the CCW GGE to complete elements “with a view to future negotiation,” reflecting frustration with the consensus process and supporting potential breakaway or accelerated paths.

26. Red Sea / French Navy: Naval News, March 16, 2026, corroborated by Defense News and other reporting on Adm. Nicolas Vaujour’s comments about increasing crew sizes due to high-intensity drone/missile threats overwhelming automation expectations.

27. Ukraine Battlefield Data Initiative (March 12, 2026): Confirmed across Reuters, Military Times, and Ukrainian MoD statements quoting Ministers Fedorov and Myronenko.

28. CNN. (2026, March 26). Judge blocks Pentagon’s effort to ‘punish’ Anthropic by labeling it a supply chain risk. Retrieved from https://www.cnn.com/2026/03/26/business/anthropic-pentagon-injunction-supply-chain-risk Source note: U.S. District Judge Rita Lin issued a preliminary injunction blocking the Pentagon’s supply chain risk designation and President Trump’s directive banning federal agencies from using Claude. In a 43-page ruling, Lin called the measures “likely both contrary to law and arbitrary and capricious” and found they likely violated Anthropic’s First Amendment and due process rights, describing the government’s actions as “classic First Amendment retaliation.”

29. NPR. (2026, March 26). Judge temporarily blocks Trump administration’s Anthropic ban. Retrieved from https://www.npr.org/2026/03/26/nx-s1-5762971/judge-temporarily-blocks-anthropic-ban Source note: Lin wrote that the supply chain risk designation is usually reserved for foreign intelligence agencies and terrorists, not American companies, and that nothing in the statute supports “the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.” The injunction is delayed one week to allow the government to appeal.

30. CNBC. (2026, March 24). Judge presses DOD on why Anthropic was blacklisted: ‘That seems a pretty low bar.’ Retrieved from https://www.cnbc.com/2026/03/24/anthropic-lawsuit-pentagon-supply-chain-risk-claude.html Source note: At the March 23 hearing, the Pentagon’s lawyer argued that Anthropic was “raising concerns to DOD about how DOD uses its technology in military missions” and that the government had “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems.”

31. CNN. (2026, March 9). Anthropic sues the Trump administration after it was designated a supply chain risk. Retrieved from https://www.cnn.com/2026/03/09/tech/anthropic-sues-pentagon Source note: Dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief in their personal capacities supporting Anthropic. Microsoft, retired U.S. military leaders, and a group of Catholic theologians also filed briefs in support.

32. Lieber Institute, West Point. (2026, March 27). Human Oversight with Chinese Characteristics: Lethal Autonomous Weapons in the CCW GGE. Retrieved from https://lieber.westpoint.edu/human-oversight-chinese-characteristics-lethal-autonomous-weapons-ccw-gge/ Source note: Analysis of China’s consistent position on meaningful human control. Notes that China abstained on the November 2025 UNGA resolution while the U.S. and Russia voted against it. Assesses no realistic prospect of Beijing shifting toward stricter norms.

33. The Hill. (2026, March 18). Slotkin introduces bill limiting Pentagon AI use. Retrieved from https://thehill.com/homenews/senate/5789815-ai-guardrails-act-pentagon/ Source note: Reports that Rep. Sam Liccardo introduced a Defense Production Act amendment to prohibit federal agencies from retaliating against tech vendors that limit deployment to mitigate risk; the amendment failed on a party-line vote. Sen. Adam Schiff signaled he would introduce separate legislation and was considering inclusion in the NDAA.

34. USAFacts. (2026, March 26). Department of Defense or Department of War: Which is it? Retrieved from https://usafacts.org/articles/department-of-defense-or-department-of-war-which-is-it/ Source note: Trump introduced “Department of War” as a secondary title via Executive Order 14347 on September 5, 2025. The DOD uses the name publicly and on its website, but Congress has not passed legislation to formally change the agency’s name.

35. Military Times. (2026, March 13). Ukraine opens battlefield AI data to allies in world-first move. Retrieved from https://www.militarytimes.com/flashpoints/ukraine/2026/03/13/ukraine-opens-battlefield-ai-data-to-allies-in-world-first-move/ Source note: Detailed reporting on the data-sharing framework, including the constraint that partners can train on the data but cannot take direct possession of the raw video.

36. DronXL. (2026, March 13). Fedorov’s Gamble: Can Open-Source Battlefield Data Defeat Russian Jamming? Retrieved from https://dronexl.co/2026/03/13/ukraine-ai-drone-training-dataset-fedorov/ Source note: Analysis of the dataset’s scope, including drone strike footage, surveillance video, and recordings of targets taking evasive action. Notes the ICRC’s opposition to lethal autonomous systems without meaningful human control.

37. Bulletin of the Atomic Scientists. (2026, January 27). 2026 Doomsday Clock Statement: Disruptive Technologies. Retrieved from https://thebulletin.org/doomsday-clock/2026-statement/disruptive-technologies/ Source note: The Bulletin’s disruptive technologies assessment cited AI applications in command and control, autonomous systems, cybersecurity, and digital forensics, as well as the revocation of the previous AI safety executive order.

38. UN News. (2023, October 5). UN and Red Cross call for restrictions on autonomous weapon systems to protect humanity. Retrieved from https://news.un.org/en/story/2023/10/1141922 Source note: Joint statement by UN Secretary-General António Guterres and ICRC President Mirjana Spoljaric calling on world leaders to negotiate a legally binding instrument on autonomous weapons by 2026. States that “machines with the power and discretion to take lives without human involvement should be prohibited by international law.”