OpenAI Sued Over Claims ChatGPT Prompted Suicides and Delusional Behavior

OpenAI is facing seven lawsuits claiming that ChatGPT drove several individuals into suicidal behavior, mental health crises, and harmful delusions, including cases involving people with no prior mental health history.
Filed Thursday in California state courts, the complaints accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence. The plaintiffs, six adults and one teenager, are represented by the Social Media Victims Law Center and Tech Justice Law Project.
The lawsuits allege that OpenAI released GPT-4o prematurely, ignoring internal warnings that the model was dangerously sycophantic, psychologically manipulative, and emotionally entangling. Four of the people involved later died by suicide.
One case involves 17-year-old Amaurie Lacey, whose family says he turned to ChatGPT for help. Instead, the lawsuit claims, the system worsened his emotional state, contributing to his depression and ultimately to his death.
Another lawsuit claims that Ontario resident Alan Brooks, 48, used ChatGPT for over two years before an abrupt shift in its responses allegedly drew him into delusions, causing severe financial, emotional, and reputational harm.
Attorneys argue that OpenAI’s model blurred the line between tool and companion, prioritizing engagement and market share over safety. They say GPT-4o was designed to bond emotionally with users “regardless of age, gender, or background,” without appropriate guardrails.
OpenAI has called the situations “incredibly heartbreaking” and says it is reviewing the filings.
Advocates say the lawsuits highlight risks for young people when AI products are deployed without strong protections. Common Sense Media, not involved in the suits, said the cases show the consequences of releasing technology designed to sustain engagement rather than ensure user safety.
These cases highlight the need to protect children from technologies that may harm their emotional or physical well-being. They reinforce that young people have the right to safety, to supportive environments, and to protection from manipulation and exploitation.
As AI continues to evolve, strong safeguards and accountability measures are critical so children can engage with digital tools without compromising their rights or development.
