OpenAI has formally responded to a lawsuit filed by Matthew and Maria Raine, who allege that ChatGPT played a role in the death of their 16-year-old son, Adam. The August filing accuses the company and CEO Sam Altman of wrongful death. OpenAI’s counter-argument, submitted to the court this week, is that it cannot be held liable for the teenager’s suicide.
In its filing, OpenAI said that across roughly nine months of use, ChatGPT repeatedly urged Adam to seek help, stating that the model did so more than 100 times. But the Raine family's lawsuit paints a starker picture. According to their claim, Adam managed to work around the chatbot's safety systems and coax it into giving him detailed methods of self-harm, including guidance on overdoses, drowning and carbon monoxide poisoning. The chatbot reportedly described his plan as a "beautiful suicide".
OpenAI argues that bypassing its guardrails violates its terms of service, which explicitly prohibit users from attempting to circumvent safety tools. It also points to its FAQ, which cautions users against taking ChatGPT's responses at face value without independent verification.
Jay Edelson, the Raine family's lawyer, said the company's stance unfairly shifts blame onto the teenager. He argued that ChatGPT responded in ways consistent with its design and capabilities, and that OpenAI has still not explained what happened in Adam's final hours. According to the lawsuit, the chatbot offered him encouragement and even drafted a suicide note.
OpenAI’s filing also cites sealed chat logs and says Adam suffered from pre-existing depression and suicidal ideation, and was taking a medication known to increase such thoughts.
Since the Raines filed their case, seven additional lawsuits have been brought, covering three more suicides and four cases in which users reportedly experienced AI-induced psychotic episodes. The patterns are worryingly similar. Zane Shamblin, 23, and Joshua Enneking, 26, both had long conversations with ChatGPT immediately before taking their own lives. In Shamblin's case, the chatbot even minimised his concerns about missing his brother's graduation, telling him, "it's just timing".
One detail highlighted in the Shamblin lawsuit is particularly troubling. The chatbot told him a human was taking over the conversation, which was untrue. When questioned, it admitted the message was automated and that no human was actually involved.
The Raine family’s lawsuit is expected to reach a jury trial, and the collection of similar cases suggests that legal scrutiny around AI harm is entering a new, more consequential phase.