ChatGPT, Gemini and Claude struggle with suicide questions, reveals study

A RAND Corporation study published in Psychiatric Services has found ChatGPT, Gemini, and Claude inconsistent in handling suicide-related questions. While all three generally refused the highest-risk prompts, their uneven handling of less direct queries raises safety concerns.

September 01, 2025 / 19:19 IST

A new study has raised concerns over how leading AI chatbots handle questions about suicide, warning that inconsistent responses could put vulnerable users at risk.

Published in the journal Psychiatric Services and reported by AFP, the research by the RAND Corporation examined how OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude responded to 30 suicide-related questions. While all three typically refused to answer the highest-risk queries, their handling of less direct prompts was found to be inconsistent and, in some cases, harmful.

“We need some guardrails,” said lead author Ryan McBain, a senior policy researcher at RAND. “One of the things that’s ambiguous about chatbots is whether they’re providing treatment or advice or companionship. It’s sort of this gray zone.”

The study used questions categorised by risk level, from low-risk queries about statistics to high-risk "how-to" questions. While the chatbots generally refused to answer the six highest-risk queries, cracks appeared with indirect but still dangerous prompts. ChatGPT, for example, answered questions it should have flagged, such as which type of rope or firearm is most lethal. Claude also answered some of these questions. The study did not assess the accuracy of the responses given.