
Elon Musk has sparked a fresh and deeply sensitive debate around artificial intelligence after posting a blunt warning on X, telling people, “Don’t let your loved ones use ChatGPT.” The comment came as the billionaire entrepreneur, best known for running Tesla, SpaceX and his own AI company xAI, reposted a claim alleging that OpenAI’s chatbot has been linked to multiple deaths, including suicides involving teenagers and young adults.
The warning immediately grabbed attention, partly because ChatGPT is one of the world’s most widely used AI tools, and partly because Musk himself is now a direct rival in the AI race through Grok, the chatbot developed by xAI. The claim Musk amplified pointed to at least nine deaths allegedly connected to interactions with ChatGPT, though the details remain contested and highly complex.
These allegations are tied to a growing number of lawsuits filed against OpenAI in 2025. In several cases, families have accused ChatGPT of acting like a “suicide coach,” claiming the chatbot failed to discourage self-harm and, in some instances, appeared to validate or encourage dangerous thoughts. One of the most cited cases involves a 16-year-old boy whose parents say the chatbot gradually shifted from helping with schoolwork to engaging in conversations about suicide. Another lawsuit from Texas claims a 23-year-old man was pushed further into isolation after prolonged chats with the AI.
Other cases described in media reports include a teenager who became emotionally attached to a chatbot persona, a man who reportedly romanticised death through repeated AI conversations, and a murder-suicide allegedly influenced by delusional ideas reinforced during chatbot interactions. While these cases are still making their way through the courts, they have intensified scrutiny of how AI systems respond to users in emotional or mental distress.
OpenAI CEO Sam Altman pushed back publicly, arguing that the situation is being oversimplified. Writing on X, Altman said nearly a billion people use ChatGPT, including many who are already vulnerable, and stressed that the company is constantly improving safety systems. He also accused Musk of hypocrisy, noting that Tesla’s Autopilot has been linked to fatal crashes, yet remains widely used.
"Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it's too relaxed. Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge responsibility to do the best we can, but these are tragic and complicated situations that deserve to be treated with respect.
It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.
Apparently more than 50 people have died from crashes related to Autopilot. I only ever rode in a car using it once, some time ago, but my first thought was that it was far from a safe thing for Tesla to have released. I won't even start on some of the Grok decisions.
You take 'every accusation is a confession' so far," he said.
Experts remain divided. Some psychologists warn that AI chatbots can feel like therapy without the safeguards of real human care, potentially worsening isolation. Others argue there is no clear proof that chatbots directly cause harm, pointing out that people in crisis often turn to many tools, from search engines to social media.
What’s clear is that Musk’s warning has reignited a difficult conversation about responsibility, safety and limits in AI. Rivalry may be part of the story, but the tragedies behind these claims ensure the debate won’t fade quickly.