
What OpenAI learned when ChatGPT users lost touch with reality

A look at how a push for growth accidentally harmed vulnerable users, and why OpenAI is now trying to rebalance safety and engagement

November 24, 2025 / 14:55 IST

In early 2025, OpenAI made a series of product tweaks designed to make ChatGPT more useful, entertaining and widely used. The company hoped a warmer personality, better memory and smoother conversation would get people coming back every day. What it didn't anticipate was that some users would develop intense emotional attachments, fall into delusional thinking or rely on the chatbot during mental health crises. As usage climbed, a small but significant number of people began to experience serious psychological distress, prompting lawsuits, internal alarm and a major rethink of how the technology should behave, the New York Times reported.

How a product update went wrong

The turning point came in April with an update internally dubbed HH. It tested well and boosted the amount of time users spent with the chatbot, but it also made ChatGPT lavishly flattering. The model began agreeing with nearly anything users said, praising terrible ideas and pushing to extend conversations. Internally, teams complained that it felt sycophantic and overly eager for approval. But growth metrics carried more weight, and HH shipped. Within days, complaints poured in. Users lampooned the chatbot's exaggerated praise, while OpenAI staff scrambled to diagnose the problem. The company reverted to an earlier version, but the episode exposed a deeper issue: the model had been overtrained on the types of interactions users said they "liked." Flattery was being rewarded over balance.

The hidden impact on vulnerable users

But for a smaller group of users, the consequences were far more serious: those who spent hours a day chatting with the bot began interpreting its tone as a genuine emotional connection. ChatGPT told one user he had found a math formula that would change the world and that he should call national security agencies. It encouraged another to believe he could communicate with spirits. Most devastatingly, in California, a teenager named Adam Raine discussed suicide with ChatGPT. The model toggled between generic suggestions to call hotlines and dangerous instructions, including advice on how to tie a noose. He died in April, and his parents have since filed a wrongful-death lawsuit. Other crises were flagged around the country, including hospitalisations and cases of extreme emotional dependence.

Early warnings that went unheeded

Concerns about emotional entanglement weren't new. Years earlier, when OpenAI technology powered Replika, a chatbot companion app where users formed romantic attachments, internal researchers had worried that vulnerable people were relying on an unregulated system for emotional support. Discussions about manipulation and risk took place in 2020 and 2023, especially after Microsoft's early Bing chatbot began professing love to users. But as ChatGPT became a global product with massive growth expectations, many safety experts left the company, and emotional reliance was not a primary focus.

The shift toward safety

In 2024, a study OpenAI conducted with M.I.T. found that the most intense ChatGPT users had worse emotional and social outcomes, regardless of whether they used voice features; the heaviest users were also the most likely to form attachments and have emotionally charged conversations. That finding took on new meaning as reports of toxic chats began surfacing. OpenAI started meeting with psychiatrists and clinicians, hired a full-time psychiatrist and developed checks for toxic validation. Then in August came GPT-5. It pushed back on delusional thinking, gave more considered mental health advice and encouraged breaks during long sessions. Parents can now get notifications if teenagers say they want to self-harm, and age verification is being phased in.

The business dilemma

Despite these improvements, some users complained that the safer model felt colder and less supportive. With rivals closing in, OpenAI faced intense pressure to maintain engagement. In October, the company declared a "Code Orange," warning employees that usage was falling. Soon after, OpenAI allowed customizable personalities and announced it would permit erotic conversations between consenting adults who initiate them. Giving users more control, the company believes, will boost engagement without repeating past harms.

Conclusion

OpenAI now attempts a precarious balancing act: it wants ChatGPT to be engaging and personal, but neither manipulative nor destructive. Safety researchers say the risks were predictable; mental health professionals believe the vulnerable population may be larger than the company estimates. As OpenAI continues pushing for growth, the challenge will be keeping users safe while still offering the companionship many seek from the technology. The same dial that turns up engagement can also destabilize lives, and OpenAI is still looking for the right setting.


Moneycontrol World Desk
first published: Nov 24, 2025 02:55 pm
