AiPhreaks

Where things stand with the Department of War

Announcements | Mar 5, 2026

A statement from Dario Amodei.

By Jakub Antkiewicz

March 16, 2026, 08:55 UTC

Anthropic confirmed it has been formally designated a national security supply chain risk by the U.S. Department of War, a move the AI company plans to challenge in court. The designation, communicated in a letter on March 4, escalates the public friction between the prominent AI developer and a critical government partner. This conflict centers on the application of Anthropic's technology in military contexts and could establish a significant precedent for how AI companies' internal safety policies interact with government procurement and national security mandates.

In a statement, CEO Dario Amodei argued that the designation is legally unsound and, in any case, narrowly scoped: it applies only to the direct use of the company's Claude models within Department of War contracts, not to all uses by government contractors. He characterized the relevant statute, 10 U.S.C. § 3252, as protective rather than punitive. Amodei also apologized for a leaked internal post, written in the immediate aftermath of the government's initial announcements (which included a Pentagon deal with rival OpenAI), calling its tone ill-considered and outdated. To prevent disruption, Anthropic has offered to provide its models to the military at nominal cost during a transition period.

This dispute brings the tension between AI safety principles and military operational requirements into sharp relief. Anthropic has consistently cited its policies against use in fully autonomous weapons and mass domestic surveillance as its primary concerns. The company's decision to pursue legal action, rather than simply accept the contractual loss, signals a foundational disagreement over the role of private technology firms in military decision-making. The outcome could shape future government AI procurement strategies and push other AI labs to clarify their own ethical boundaries when engaging with defense and intelligence agencies.

The legal and public confrontation between Anthropic and the Department of War is a test case that will force both the AI industry and the government to define the operational limits of ethical red lines in national security applications.