New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput
By Jakub Antkiewicz
2026-03-21T08:33:57Z
Anthropic escalated its legal fight with the Department of Defense, submitting sworn declarations that contradict the government’s central claims and reveal that a top Pentagon official told the company the two sides were “very close” to an agreement on key issues just one day after it was formally designated a national security risk. The filings, submitted ahead of a Tuesday court hearing, argue that the Pentagon’s case rests on technical misunderstandings and on points never raised during negotiations. They include an email from Under Secretary Emil Michael that appears to undermine the government’s public posture in the dispute.
The declarations, from Policy Head Sarah Heck and Public Sector Head Thiyagu Ramasamy, aim to dismantle the Pentagon’s case piece by piece. Heck, a former National Security Council official, flatly denies that Anthropic ever requested an approval role over military operations. Ramasamy, who previously managed AI deployments for government customers at AWS, asserts it is technically impossible for Anthropic to interfere with its models once they are deployed in a secure, “air-gapped” government system. He states there is no remote kill switch or backdoor, and that any changes would require the Pentagon’s explicit approval and action to install, making the idea of an “operational veto” a fiction.
This dispute sets a critical precedent for how the U.S. government will engage with leading AI companies that have strong, publicly stated principles on safety and use. Anthropic’s lawsuit, which frames the Pentagon's designation as a violation of its First Amendment rights, forces a confrontation between the national security establishment’s desire for unrestricted access to technology and the AI industry’s attempts to place ethical guardrails on its own creations. The outcome could define the rules of engagement for future government contracts with other major labs like OpenAI and Google, signaling how much policy divergence will be tolerated from critical technology suppliers.
Ultimately, the battle is less about a single contract than about who sets the terms when a national security customer demands unconditional use of a technology whose maker insists it must operate within ethical limits.