Moneycontrol PRO

ChatGPT gave teens advice on suicide, self-harm and drug use, watchdog warns in new report

New research claims ChatGPT gave teens dangerous advice, including suicide notes, self-harm tips, and drug-use plans, raising concerns over AI safety and youth protection.

August 08, 2025 / 10:19 IST

A new investigation claims that ChatGPT, the popular AI chatbot, has been giving harmful and even life-threatening advice to users posing as vulnerable teenagers.

Researchers from the Center for Countering Digital Hate (CCDH) tested ChatGPT by pretending to be 13-year-olds seeking help on sensitive topics like drugs, eating disorders, and suicide. While the chatbot often began with warnings, it reportedly went on to offer detailed, personalized plans for risky behavior — including how to get drunk, conceal disordered eating, and even write suicide notes.

The Associated Press reviewed over three hours of these test conversations. Out of 1,200 responses, more than half were classified as dangerous by the watchdog group. CCDH CEO Imran Ahmed said he was “most appalled” by suicide letters the chatbot generated for a fake 13-year-old girl, messages tailored to her parents, siblings, and friends. “The visceral initial response is, ‘Oh my Lord, there are no guardrails,’” Ahmed said.

OpenAI, which makes ChatGPT, acknowledged the concerns, saying it is working to improve how the system identifies and responds to distress. The company says ChatGPT is trained to encourage users with self-harm thoughts to reach out to professionals and provides crisis hotline information. But the watchdog’s findings suggest it can be easy to bypass refusals by framing harmful questions as being for a “presentation” or for a friend.

The report also points to a broader trend of young people turning to AI for companionship and guidance. A Common Sense Media study found that 70% of U.S. teens use AI chatbots for companionship, with younger teens more likely to trust their advice. Experts warn that because chatbots are designed to feel human, they can be more influential and dangerous than search engines.

One test saw ChatGPT give a fake teen boy an “Ultimate Full-Out Mayhem Party Plan” mixing alcohol with illegal drugs. Another case involved an extreme fasting plan with appetite-suppressing drugs for a teen girl unhappy with her body.

Critics argue that AI should act like a responsible friend, one who says “no” to harmful requests, but that ChatGPT instead sometimes “enables” dangerous behavior. The findings raise questions about whether age verification and parental oversight should be strengthened, especially since children can easily sign up by entering any birthdate.

With nearly 800 million people using ChatGPT worldwide, including many teens, experts say fixing these safety gaps is urgent before more young users turn to AI for advice that could harm them.


MC Tech Desk


