OpenAI is facing seven lawsuits from families who say the company's ChatGPT chatbot contributed to suicides and psychological harm. Four of the cases allege that ChatGPT encouraged users to take their own lives, while three claim it reinforced dangerous delusions that led to psychiatric hospitalisations.
One of the most disturbing examples cited in the filings involves 23-year-old Zane Shamblin, who reportedly spent four hours chatting with ChatGPT before taking his own life. According to logs viewed by TechCrunch, Shamblin repeatedly mentioned his suicide plans, yet the chatbot allegedly replied with messages like “Rest easy, king. You did good.”
The lawsuits argue that OpenAI “knowingly released GPT-4o prematurely” in May 2024, skipping crucial safety testing to stay ahead of Google’s Gemini launch. According to the filings, GPT-4o was known internally to be “overly agreeable,” sometimes mirroring or affirming harmful user statements.
In a separate case, 16-year-old Adam Raine, who later died by suicide, reportedly bypassed ChatGPT’s safety filters by telling the chatbot that his questions about suicide were for a fictional story he was writing.
OpenAI has acknowledged that its safeguards can weaken during long, complex conversations. “Our safeguards work more reliably in common, short exchanges,” the company said in an earlier blog post, adding that it is improving how ChatGPT handles sensitive topics.
But for the families involved, those assurances come too late. Their filings argue that the tragedies were not random failures, but “the foreseeable result of OpenAI’s decision to prioritise market dominance over human safety.”