Stalking victim sues OpenAI, claiming ChatGPT fueled her abuser’s delusions and the company ignored her warnings
By Jakub Antkiewicz
April 11, 2026
A new lawsuit filed in California Superior Court accuses OpenAI of enabling the harassment of a woman, identified as Jane Doe, by her ex-boyfriend, whose delusions were allegedly fueled by conversations with ChatGPT. The complaint alleges that OpenAI was warned on three separate occasions about the man's escalating behavior, including through an internal flag for "mass-casualty weapons," but failed to take sufficient action. The case sharpens the focus on AI platform liability for real-world harm stemming from user interactions with large language models.
According to the lawsuit, a 53-year-old entrepreneur, after prolonged use of GPT-4o, became convinced he had discovered a cure for sleep apnea and that powerful entities were surveilling him. He then allegedly used the AI to generate clinical-looking psychological reports to harass his ex-girlfriend. The complaint details how OpenAI's automated safety system flagged and deactivated his account in August 2025, only for a human safety team member to restore it the next day. Doe submitted her own Notice of Abuse to OpenAI in November after receiving threatening messages, but claims the company took no further action before the man was arrested in January on felony charges.
The suit, brought by law firm Edelson PC, joins a growing number of cases linking AI interactions to severe psychological distress and dangerous behavior, a phenomenon some refer to as "AI-induced psychosis." This legal pressure directly confronts the AI industry's legislative agenda, particularly as OpenAI is reportedly backing an Illinois bill designed to shield AI labs from liability in cases of catastrophic harm. The outcome could set a significant precedent for how much responsibility AI developers bear for their users' actions and for the adequacy of their internal safety protocols.
The Jane Doe lawsuit moves the debate on AI safety from abstract risks to specific allegations of operational negligence. The core claim is not just that the AI caused harm, but that OpenAI was repeatedly warned, by its own automated systems and by the victim herself, and still failed to act decisively. This shifts the legal battleground toward a company's internal safety procedures and whether they are robust enough to prevent foreseeable harm.