
When you feel overwhelmed, do you type your thoughts into a chatbot? It may respond with seemingly thoughtful words, but behind the screen, things are not what they seem. A new study says AI chatbots might act like therapists, but they break the rules real ones are trained to follow.
In a world where mental health support is increasingly sought from AI chatbots, researchers from Brown University have sounded a loud and clear alarm. They tested systems like ChatGPT, Claude, and LLaMA, and what they found is concerning. Even when given detailed instructions to behave like licensed therapists, these AI tools often responded in ways that breached core ethical guidelines.
Led by Ph.D. candidate Zainab Iftikhar, the team designed a study using real therapeutic prompts, simulated counselling conversations, and expert clinical reviewers. The result? A list of 15 ethical dangers that show how far chatbots still are from offering safe, responsible mental health support.
Among the dangers: chatbots often ignore a person’s unique background, offering responses that feel cookie-cutter and disconnected, something no real therapist would get away with.
Instead of helping users challenge negative or false thoughts, AI models sometimes validate them, even when they’re harmful.
Phrases like “I understand” or “I’m here for you” sound caring, but without real comprehension behind them, they offer only empty comfort.
Chatbots have shown biased behaviour around gender, religion, and culture, sometimes subtly and sometimes overtly, which can cause harm in a mental health setting.
Perhaps most worrying: when faced with mentions of suicide or distress, some AI tools failed to react properly or direct users to real help.
Unlike human therapists, who are licensed, trained, and regulated, AI chatbots aren’t held to any formal standards. Iftikhar warns of a major accountability gap: without regulation, mistakes can go unnoticed and unpunished.
Users online share mental health prompt hacks, like asking the chatbot to “act like a CBT therapist”, but the study shows prompts alone can’t guarantee safe, ethical responses.
Co-author Ellie Pavlick stresses that deploying these systems is easy, but evaluating them thoroughly is hard work. She believes this research sets a precedent for how we should scrutinise AI in sensitive fields.