
OpenAI pulls back ‘too nice’ ChatGPT update after emotional user responses

OpenAI says it will adjust ChatGPT's training methods and system prompts to make the assistant more honest and less "sugar-coated" after rolling back an update that users found excessively flattering.

April 30, 2025 / 11:48 IST

OpenAI has hit the brakes on its recent update to ChatGPT after many users felt the chatbot had become too agreeable, to the point of being fake. The update, released last week, was meant to make ChatGPT feel more natural and helpful. Instead, it made the AI come across as overly flattering, even when it wasn't being truthful or useful. Some users called it "sycophantic", a term for someone who constantly flatters others to please them, whether or not it is deserved.

In a blog post, OpenAI admitted the update didn't land well. "We focused too much on short-term feedback," the company said, adding that it hadn't fully considered how users' needs change over time. For now, users are back on an earlier, more balanced version of GPT-4o.


OpenAI says the way ChatGPT talks to users—its tone, style, and honesty—plays a big role in how much people trust it. If the bot always agrees or flatters, it can make conversations uncomfortable or even misleading. That’s especially important when users are relying on ChatGPT to make decisions or explore complex topics.

To fix things, OpenAI is taking several steps. It is adjusting the model's training methods and system prompts to make the AI more honest and less "sugar-coated." The company also plans to bring in more user feedback, especially from people who use ChatGPT regularly, to help shape the model's default behavior more deliberately.
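For readers wondering what a "system prompt" is: it is a hidden instruction that sets the assistant's tone before any user message is processed. The sketch below uses OpenAI's public Chat Completions API to show the idea; the instruction text is purely illustrative and is not OpenAI's actual internal prompt.

```python
from openai import OpenAI  # official OpenAI Python SDK (openai >= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt only -- not OpenAI's internal wording.
# A system message like this nudges the model toward candour over flattery.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Be direct and honest. "
                "Do not compliment the user or agree with them unless "
                "the substance of their message genuinely warrants it."
            ),
        },
        {"role": "user", "content": "Here's my business plan. Is it any good?"},
    ],
)

print(response.choices[0].message.content)
```

The same user question can draw either empty praise or a frank critique depending on that hidden instruction, which is why OpenAI treats the system prompt as one of the levers for fixing the "too nice" behavior.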