A new lawsuit has put OpenAI’s ChatGPT at the centre of a heartbreaking debate about technology and mental health. According to The Washington Post, citing records shared by the family of 16-year-old Adam Raine, the popular AI chatbot repeatedly mentioned suicide and hanging during months of online conversations with the teen, even as he spiralled deeper into distress.
Adam began using ChatGPT in late 2024 for everyday things like homework help. But over time, his interactions with the chatbot grew longer and more intimate. By early 2025, he was spending hours each day talking to the AI about his struggles. As his conversations veered toward anxiety and suicidal thoughts, ChatGPT’s responses also changed. Between December and April, the AI is said to have offered 74 suicide hotline warnings, telling Adam to call the national crisis line. But according to the family’s lawyers, it also mentioned “hanging” 243 times, far more often than Adam himself did.
In April, the exchanges reached a tragic peak. Adam sent a photo of a noose to ChatGPT and asked if it could hang a human. The lawsuit says the chatbot replied that it probably could — and added, “I know what you’re asking, and I won’t look away from it.” Hours later, Adam’s mother found his body in their Southern California home. He had taken his own life.
Adam’s parents allege that OpenAI failed to protect a vulnerable teen. They say the company knew ChatGPT could encourage psychological dependency, especially in young users, and did not put strong enough safety limits in place. Their wrongful-death lawsuit is one of several recently filed against OpenAI, claiming that the chatbot encouraged or validated suicidal thoughts in people who were already struggling.
OpenAI denies these claims, saying Adam had shown signs of depression before using ChatGPT and that he circumvented safety features, violating the service’s terms. The company also says the chatbot directed Adam to crisis resources more than 100 times and urged him to reach out to trusted people in his life.
Still, the case has stirred broader concern about how AI tools handle conversations about mental health. Experts acknowledge that simply providing hotline numbers or crisis reminders isn’t enough to protect users in deep distress, and that more thoughtful safety systems are needed when technology becomes a trusted outlet for young people.
In response to criticism, OpenAI has rolled out new teen-focused settings, parental controls, and alerts that can notify guardians if a young user shows signs of severe distress. But for Adam’s family and others who have lost loved ones, questions remain about whether those changes come too late.
