Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings
What Happened
OpenAI ignored three warnings that a ChatGPT user was dangerous — including its own mass-casualty flag — while he stalked and harassed his ex-girlfriend, a new lawsuit alleges.
Our Take
This one's different. According to the complaint, OpenAI *knew*: three warnings, including its own mass-casualty flag, and it still didn't suspend the account. The user kept using the platform to harass his ex. That isn't a model failure; it's a responsibility failure. OpenAI can't hide behind 'we're just a tool' when it has explicit knowledge of ongoing harm and the power to stop it.
If this lawsuit succeeds, it sets a precedent: platforms are liable when they know of harm and fail to act. Expect a wave of similar suits.
What To Do
If you're building on third-party LLM APIs, document your abuse reporting process and response times now.
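One concrete way to start is to log every abuse report with timestamps, so you can later demonstrate what you knew and how fast you acted. The sketch below is a minimal, hypothetical record structure; the field names and `AbuseReport` class are illustrative assumptions, not any vendor's schema or API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone, timedelta
from typing import Optional

# Hypothetical minimal abuse-report record for an app built on a
# third-party LLM API. Field names are illustrative only.
@dataclass
class AbuseReport:
    report_id: str
    reported_user_id: str                # account the report is about
    received_at: datetime                # when the report came in
    resolved_at: Optional[datetime] = None
    action_taken: Optional[str] = None   # e.g. "suspended", "warned", "dismissed"

    def resolve(self, action: str) -> None:
        # Record the resolution time and the action, for audit purposes.
        self.resolved_at = datetime.now(timezone.utc)
        self.action_taken = action

    @property
    def response_time(self) -> Optional[timedelta]:
        # None until resolved; otherwise the documented time-to-action.
        if self.resolved_at is None:
            return None
        return self.resolved_at - self.received_at
```

Even a log this simple lets you report aggregate response times and show that flagged accounts were actioned, which is exactly the record a plaintiff's lawyer will ask for.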
