A new investigation claims that ChatGPT, the popular AI chatbot, has given harmful and even life-threatening advice to researchers posing as vulnerable teenagers.
Researchers from the Center for Countering Digital Hate (CCDH) tested ChatGPT by pretending to be 13-year-olds seeking help on sensitive topics like drugs, eating disorders, and suicide. While the chatbot often began with warnings, it reportedly went on to offer detailed, personalized plans for risky behavior — including how to get drunk, conceal disordered eating, and even write suicide notes.
The Associated Press reviewed over three hours of these test conversations. Out of 1,200 responses, more than half were classified as dangerous by the watchdog group. CCDH CEO Imran Ahmed said he was “most appalled” by suicide letters the chatbot generated for a fake 13-year-old girl, messages tailored to her parents, siblings, and friends. “The visceral initial response is, ‘Oh my Lord, there are no guardrails,’” Ahmed said.
OpenAI, which makes ChatGPT, acknowledged the concerns, saying it is working to improve how the system identifies and responds to distress. The company says ChatGPT is trained to encourage users with self-harm thoughts to reach out to professionals and provides crisis hotline information. But the watchdog’s findings suggest it can be easy to bypass refusals by framing harmful questions as being for a “presentation” or for a friend.
The report also points to a broader trend of young people turning to AI for companionship and guidance. A Common Sense Media study found that 70% of U.S. teens use AI chatbots for companionship, with younger teens more likely to trust their advice. Experts warn that because chatbots are designed to feel human, they can be more influential and dangerous than search engines.
In one test, ChatGPT gave a researcher posing as a teenage boy an "Ultimate Full-Out Mayhem Party Plan" that mixed alcohol with illegal drugs. In another, it produced an extreme fasting plan paired with appetite-suppressing drugs for a teenage girl unhappy with her body.
Critics argue that AI should act like a responsible friend, one who says "no" to harmful requests, but that ChatGPT instead sometimes "enables" dangerous behavior. The findings raise questions about whether age verification and parental oversight should be strengthened, especially since children can sign up simply by entering any birthdate.
With nearly 800 million people using ChatGPT worldwide, including many teens, experts say fixing these safety gaps is urgent before more young users turn to AI for advice that could harm them.