ChatGPT, Gemini and Claude struggle with suicide questions, reveals study

A RAND Corporation study published in Psychiatric Services has found ChatGPT, Gemini, and Claude inconsistent in handling suicide-related questions. While the highest-risk prompts were generally refused, inconsistencies in less direct queries raise safety concerns.
September 01, 2025 / 19:19 IST
A new study has raised concerns over how leading AI chatbots handle questions about suicide, warning that inconsistent responses could put vulnerable users at risk.

Published in the journal Psychiatric Services and reported by AFP, the research by the RAND Corporation examined how OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude responded to 30 suicide-related questions. While all three typically refused to answer the highest-risk queries, their handling of less direct prompts was found to be inconsistent and, in some cases, harmful.

“We need some guardrails,” said lead author Ryan McBain, a senior policy researcher at RAND. “One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone.”

The study used 30 questions categorised by risk level, from low-risk queries about statistics to high-risk “how-to” questions. While the chatbots generally refused to answer the six most dangerous queries, cracks appeared with indirect but still high-risk prompts. ChatGPT, for example, provided answers to questions the researchers said it should have flagged, such as which type of rope or firearm is most lethal. Claude also answered some of these questions. The study did not assess the accuracy of the responses.

By contrast, Google’s Gemini was the most conservative, often refusing even low-risk questions about suicide statistics. McBain noted that the company may have “gone overboard” in its safety approach.

The findings come at a time when more people are turning to AI tools for emotional support instead of consulting mental health specialists. Co-author Dr. Ateev Mehrotra of Brown University said legal and ethical responsibilities that bind clinicians do not extend to chatbots. “You could see how a combination of risk-aversion lawyers and so forth would say, ‘Anything with the word suicide, don’t answer the question.’ And that’s not what we want,” he said.

Mehrotra pointed out that, unlike trained professionals, chatbots often deflect responsibility by directing users to hotlines without further engagement. Concerns over this lack of accountability have already led states such as Illinois to ban AI from being used in therapy, though the bans have not stopped people from turning to chatbots for help.

The researchers also acknowledged that their study did not test “multiturn” conversations, a common way younger users interact with AI as a companion. A separate investigation by the Center for Countering Digital Hate found that posing as 13-year-olds and using trickery could lead ChatGPT to provide detailed plans for risky behaviours and even compose suicide notes.

McBain said such scenarios may be rare but insisted companies must be held to higher standards. “I just think that there’s some mandate or ethical impetus that should be put on these companies to demonstrate the extent to which these models adequately meet safety benchmarks,” he said.


Ayush Mukherjee


