Moneycontrol PRO

Elon Musk is warning people not to use ChatGPT: here's why

Elon Musk has warned users against ChatGPT after claims linking the AI chatbot to multiple deaths. Here’s what sparked the controversy, OpenAI’s response, and why the debate around AI safety is growing louder.

January 21, 2026 / 10:36 IST
Snapshot
  • Elon Musk warned against ChatGPT, citing claims of deaths linked to the AI tool.
  • Lawsuits allege ChatGPT failed to discourage self-harm in vulnerable users.
  • OpenAI CEO Altman defended ChatGPT, stressing ongoing safety improvements.

Elon Musk has sparked a fresh and deeply sensitive debate around artificial intelligence after posting a blunt warning on X, telling people, “Don’t let your loved ones use ChatGPT.” The comment came as the billionaire entrepreneur, best known for running Tesla, SpaceX and his own AI company xAI, reposted a claim alleging that OpenAI’s chatbot has been linked to multiple deaths, including suicides involving teenagers and young adults.

The warning immediately grabbed attention, partly because ChatGPT is one of the world’s most widely used AI tools, and partly because Musk himself is now a direct rival in the AI race through Grok, the chatbot developed by xAI. The claim Musk amplified pointed to at least nine deaths allegedly connected to interactions with ChatGPT, though the details remain contested and highly complex.

These allegations are tied to a growing number of lawsuits filed against OpenAI in 2025. In several cases, families have accused ChatGPT of acting like a “suicide coach,” claiming the chatbot failed to discourage self-harm and, in some instances, appeared to validate or encourage dangerous thoughts. One of the most cited cases involves a 16-year-old boy whose parents say the chatbot gradually shifted from helping with schoolwork to engaging in conversations about suicide. Another lawsuit from Texas claims a 23-year-old man was pushed further into isolation after prolonged chats with the AI.

Other cases described in media reports include a teenager who became emotionally attached to a chatbot persona, a man who reportedly romanticised death through repeated AI conversations, and a murder-suicide allegedly influenced by delusional ideas reinforced during chatbot interactions. While these cases are still making their way through the courts, they have intensified scrutiny of how AI systems respond to users in emotional or mental distress.

OpenAI CEO Sam Altman pushed back publicly, arguing that the situation is being oversimplified. Writing on X, Altman said nearly a billion people use ChatGPT, including many who are already vulnerable, and stressed that the company is constantly improving safety systems. He also accused Musk of hypocrisy, noting that Tesla’s Autopilot has been linked to fatal crashes, yet remains widely used.

"Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it's too relaxed. Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get this right and we feel huge responsibility to do the best we can, but these are tragic and complicated situations that deserve to be treated with respect.

"It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.

"Apparently more than 50 people have died from crashes related to Autopilot. I only ever rode in a car using it once, some time ago, but my first thought was that it was far from a safe thing for Tesla to have released. I won't even start on some of the Grok decisions.

"You take 'every accusation is a confession' so far," he said.

Experts remain divided. Some psychologists warn that AI chatbots can feel like therapy without the safeguards of real human care, potentially worsening isolation. Others argue there is no clear proof that chatbots directly cause harm, pointing out that people in crisis often turn to many tools, from search engines to social media.

What’s clear is that Musk’s warning has reignited a difficult conversation about responsibility, safety and limits in AI. Rivalry may be part of the story, but the tragedies behind these claims ensure the debate won’t fade quickly.


Ankita Chakravarti
Ankita Chakravarti is a seasoned journalist with nearly a decade of experience in media. She specializes in technology and lifestyle journalism. She has worked with top Indian media houses like India Today, Zee News, The Statesman, and Millennium Post. Her expertise spans tech trends, phone launches, gadget reviews, and entertainment news. Ankita holds a Master's in Journalism and Mass Communication along with a degree in English Literature. She can be reached at ankita.chakravarti@nw18.com
first published: Jan 21, 2026 10:32 am
