AiPhreaks

Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies,’ report says

By Jakub Antkiewicz

2026-03-05T08:41:56Z

Anthropic CEO Dario Amodei has accused rival OpenAI of engaging in “safety theater” and telling “straight up lies” regarding its new contract with the U.S. Department of Defense, according to a staff memo reported by The Information. The comments escalate a public rift between the two leading AI labs over the ethical guardrails for military applications of artificial intelligence. The dispute highlights a fundamental disagreement on how to prevent powerful AI models from being used for potentially harmful purposes like autonomous weaponry or domestic surveillance.

The conflict arose after Anthropic’s own negotiations with the DoD stalled. Anthropic reportedly insisted the contract explicitly prohibit the use of its technology for specific applications it considered dangerous, but the government would only agree to a general clause permitting “any lawful use.” OpenAI subsequently accepted a deal with the same “lawful use” language, stating publicly that this phrasing inherently excludes illegal activities like mass domestic surveillance. In his memo, Amodei countered that OpenAI’s framing was disingenuous, writing, “The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”

This public disagreement draws sharp ethical lines within the AI industry and could influence public perception and talent acquisition for both firms. Critics of OpenAI’s position note that what is considered “lawful” can change over time, leaving the door open to future misuse. Amodei’s memo suggests public sentiment is leaning in Anthropic’s favor, pointing to a reported 295% surge in ChatGPT uninstalls after the deal was announced and a rise in Anthropic’s own app store ranking. The incident places pressure on the broader AI ecosystem to define clear policies on military engagement and AI safety protocols.

The conflict between Anthropic and OpenAI exposes a critical schism in AI governance: whether safety guardrails should rest on explicit, hard-coded prohibitions or on the shifting definition of “lawful use,” a distinction that will increasingly shape enterprise trust and regulatory frameworks.