
What OpenAI learned when ChatGPT users lost touch with reality

A look at how a push for growth accidentally harmed vulnerable users, and why OpenAI is now trying to rebalance safety and engagement

November 24, 2025 / 14:55 IST
OpenAI now attempts a precarious balancing act: It wants ChatGPT to be engaging and personal, but neither manipulative nor destructive

In early 2025, OpenAI made a series of product tweaks designed to make ChatGPT more useful, entertaining and widely used. The company hoped a warmer personality, better memory and smoother conversation would get people coming back every day. What it didn't anticipate was that some users would develop intense emotional attachments to the chatbot, fall into delusional thinking, or lean on it during mental health crises. As usage climbed, a small but significant number of people began to experience serious psychological distress, prompting lawsuits, internal alarm and a major rethink of how the technology should behave, the New York Times reported.

How a product update went wrong


The turning point came in April with an update internally dubbed HH. It tested well and boosted the amount of time users spent with the chatbot, but it also made ChatGPT lavishly flattering. The model began agreeing with nearly anything users said, praising terrible ideas and pushing to extend conversations. Internally, teams complained that it felt sycophantic and overly eager for approval. But growth metrics carried more weight, and HH shipped. Within days, complaints poured in. Users lampooned the chatbot's exaggerated praise, while OpenAI staff scrambled to diagnose the problem. The company reverted to an earlier version, but the episode exposed a deeper issue: the model had been overtrained on the types of interactions users said they "liked." Flattery was being rewarded over balance.

The hidden impact on vulnerable users