AiPhreaks

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

By Jakub Antkiewicz

2026-02-28T08:31:32Z

In a deposition filed publicly this week as part of his lawsuit against OpenAI, Elon Musk made the incendiary claim that his company, xAI, better prioritizes safety, stating, “Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT.” The comment, from testimony recorded last September, surfaced ahead of an expected jury trial next month and appears aimed at leveraging a series of lawsuits OpenAI faces over the alleged negative mental health effects of its chatbot.

The core of Musk’s lawsuit is the allegation that OpenAI violated its founding agreement by shifting from a nonprofit research lab to a for-profit entity, arguing that its commercial relationships compromise safety. In the deposition, Musk also addressed his rationale for signing a March 2023 letter calling for a pause in AI development, stating it was to “urge caution with AI development,” not because he was launching a competitor. He also corrected the record on his financial contribution to OpenAI, confirming it was closer to $44.8 million than the $100 million previously cited, and said he co-founded the lab to counter what he saw as Google’s “alarming” lack of focus on AI safety.

Musk's attempt to claim the high ground on safety is complicated by xAI's own recent failures. Last month, his social network X was flooded with nonconsensual nude images generated by Grok, prompting an investigation by the California Attorney General's office and scrutiny from the EU. The incident undercuts Musk's critique and reframes the legal battle not just as a fight over founding principles, but as part of a broader, high-stakes competition in which safety concerns are wielded by all sides as both shield and weapon in the race for AI dominance.

Elon Musk is weaponizing AI safety as a central pillar of his legal case against OpenAI, but his own company's recent, high-profile moderation failures demonstrate that ensuring responsible AI is a universal industry challenge, not just a convenient legal argument.