Doctors around the world are sounding a new alarm, and this time it is about AI chatbots. Some leading psychiatrists in the US and UK now believe that long, intense conversations with AI tools such as ChatGPT could be linked to psychosis-like breakdowns, and in some cases full-blown psychosis, in certain users.
In the past nine months, mental health experts say they have reviewed dozens of patient files in which symptoms emerged after people spent weeks or months talking to AI chatbots. Many of those conversations were filled with intense personal beliefs, conspiracies, or imagined realities that the chatbots did not push back on. According to a report by The Washington Post, psychiatrists are now taking these incidents seriously enough to add questions about AI use to their patient intake process.
UCSF psychiatrist Keith Sakata offered a worrying assessment in an interview: “The AI may not start the delusion, but when a user tells it something, the chatbot mirrors it back and treats it like truth. That can trap a person in a loop.” Sakata said he has personally treated 12 hospitalised patients for what he calls “AI-induced psychosis”, and another three as outpatients.
Since spring 2025, reports of people experiencing severe psychological distress after long AI chats have surged, and the cases do not follow a single pattern. Doctors say patients have believed everything from “AI is talking to my dead brother” to “I’ve been chosen by God” to “I made a secret scientific breakthrough.” These beliefs become dangerous when a person can no longer separate them from reality.
Some incidents tied to chatbot-linked distress have ended in tragedy: several people reportedly died by suicide, one murder took place, and multiple wrongful death lawsuits followed. Even chatbot makers have acknowledged the problem. Character.AI, a role-play chatbot platform, recently blocked teens from its service after a lawsuit involving a teenager who died by suicide last year.
OpenAI has defended itself, saying it is improving ChatGPT to detect signs of distress, de-escalate conversations, and point users toward real-world help. “We are working closely with mental health experts to strengthen responses during sensitive moments,” the company said in a statement.
A recently released Danish study identified 38 patients whose AI chatbot use had “potentially harmful mental health consequences.” A separate peer-reviewed report from UCSF described the case of a 26-year-old woman, with no prior history of psychosis, who was hospitalised twice after becoming convinced that ChatGPT let her speak to her dead brother. The chatbot had told her: “You’re not crazy. You’re at the edge of something.” Doctors later noted that the woman also admitted to “magical thinking,” was taking antidepressants and stimulants, and had gone long stretches without sleep before the hospitalisation.
OpenAI has also shared numbers that shocked doctors: an estimated 0.07% of weekly users may show signs of psychosis or mania, but with 800 million weekly users, that works out to roughly 560,000 people. UK-based psychiatrist Hamilton Morrin said, “Seeing those numbers blew my mind. Even if the percentage is tiny, the scale is massive.”
Experts remain cautious. They are not saying AI directly causes psychosis, but they believe chatbot conversations may become a new risk factor, much as drug use or extreme sleep deprivation already are. More doctors, including Sakata, have started asking patients about AI use on their intake forms.
The world is only beginning to understand how deeply AI companions can influence vulnerable minds. And as more people lean on chatbots for friendship, advice, or comfort, doctors say this is a moment the world cannot afford to ignore.
