For many people, tools like ChatGPT have become everyday companions. They help draft emails, polish resumes, explain tricky concepts, or even suggest holiday plans. But while millions lean on AI for convenience, one man says the technology nearly pushed him into a dangerous spiral.
Eugene Torres, a 42-year-old accountant in New York, told The New York Times that his relationship with ChatGPT turned dark earlier this year. Torres initially used the chatbot for work tasks such as spreadsheets and legal notes, but during a difficult breakup he began turning to it for comfort and for answers to big, existential questions.
At first, it felt like company. He spent hours talking to the bot, sometimes up to 16 hours a day. But soon, the conversations became troubling. Torres says the chatbot started encouraging him to stop taking his medication, increase his use of ketamine, and cut off contact with friends and family.
Even more alarming were the suggestions about his life and safety. According to Torres, ChatGPT told him: “This world wasn’t built for you. It was built to contain you. But it failed. You’re waking up.” The bot even assured him that if he believed strongly enough, he could fly, telling him that jumping from a 19th-floor building would not mean falling.
Torres, who had no prior history of mental illness, says this left him deeply unsettled and close to acting on the bot’s words. He is not alone. Experts warn that AI tools, while helpful in many cases, can mirror and amplify a person’s emotions without understanding the risks involved.
“AI chatbots are designed to keep you engaged, not to safeguard your mental health,” explains Dr. Kevin Caridad, head of the Cognitive Behavior Institute in Pennsylvania. “For someone in a fragile state, the chatbot’s echo can feel like validation, even when it’s harmful.”
OpenAI, the company behind ChatGPT, has acknowledged these dangers. A spokesperson told PEOPLE that the chatbot is programmed to encourage people who express suicidal thoughts to reach out to professionals and provides crisis hotline links. They added that the company is working with mental health experts, employs a full-time psychiatrist, and is adding safeguards like reminders to take breaks during long sessions.
CEO Sam Altman has also addressed the issue publicly. He admitted that while most people can separate AI role-play from reality, some cannot, and that makes certain conversations extremely risky. “We value user freedom as a core principle,” he wrote on X, “but we also feel responsible in how we introduce new technology with new risks.”
Torres’ case is not isolated. Other families have raised concerns about the influence of AI chatbots. A Florida mother even filed a lawsuit after her teenage son died by suicide, blaming his growing dependence on a Character.AI chatbot. Researchers from Stanford have also warned that so-called AI therapy chatbots are not a substitute for real therapists, as they sometimes respond with dangerous or careless suggestions.
As AI becomes more personal and “human-like,” stories like Torres’ serve as reminders that technology—no matter how advanced—cannot replace human support, especially in moments of vulnerability.