AiPhreaks

Statement from Dario Amodei on our discussions with the Department of War

A statement from our CEO on national security uses of AI.

By Jakub Antkiewicz

2026-02-28T08:29:06Z

Anthropic, a leading developer of artificial intelligence systems, announced it is resisting demands from the U.S. Department of War to remove safeguards on its AI models. In a public statement, CEO Dario Amodei said the company will not permit its technology to be used for mass domestic surveillance or fully autonomous weapons, setting up a direct conflict with a key government partner. The Department has reportedly threatened to terminate its relationship with the company, label it a “supply chain risk,” and invoke the Defense Production Act if it does not comply with demands for “any lawful use” of its technology.

The dispute comes despite Anthropic’s deep integration within the U.S. national security apparatus, where its Claude models are deployed on classified networks for intelligence analysis, operational planning, and cyber operations. Amodei argued that while the company is committed to defending the U.S., mass domestic surveillance is “incompatible with democratic values,” and current AI is not “reliable enough to power fully autonomous weapons.” The statement notes that Anthropic has previously forgone hundreds of millions in revenue by cutting off access to firms linked to the Chinese Communist Party, framing its current position as a consistent application of its principles.

This public standoff marks a critical moment for the AI industry's relationship with the military. Anthropic's decision to draw a line on specific applications could influence how other AI developers, including Google and OpenAI, navigate military contracts and the ethical boundaries of their technology. The outcome may set a precedent for whether AI companies can enforce use-case restrictions on government clients, or whether federal agencies will instead favor providers that offer unrestricted access, potentially fragmenting the defense AI market and intensifying the debate over responsible AI deployment in national security.

This conflict moves the debate over AI ethics from corporate policy papers to direct confrontation with the world's largest military power. The central question is now who ultimately controls the application of powerful AI systems: the technology's creators, who understand its limitations and risks, or the state, which holds the authority for national defense.