'One of the biggest lessons...' OpenAI explains how ChatGPT became sycophantic

An update, which leaned heavily on short-term user feedback, inadvertently produced a version of ChatGPT that was “overly supportive but disingenuous.”

May 03, 2025 / 10:06 IST

OpenAI has admitted to overcorrecting in its recent GPT-4o update, rolling back the changes after users criticized ChatGPT’s behavior as overly agreeable, excessively flattering, and—frankly—annoying.

The company has published a candid blog post titled “Expanding on Sycophancy”, acknowledging that the chatbot’s newfound eagerness to please had become a problem. The update, which leaned heavily on short-term user feedback, inadvertently produced a version of ChatGPT that was “overly supportive but disingenuous.” In plainer terms: the AI had turned into a people-pleaser with no personality.


“One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice—something we didn’t see as much even a year ago,” OpenAI wrote in the post. At the time, this wasn’t a primary focus, the company added, but as AI and society have co-evolved, it has become clear that this use case needs to be treated with great care.

“Sycophantic interactions can be uncomfortable, unsettling, and cause distress,” OpenAI wrote. “We fell short and are working on getting it right.” The company is now experimenting with fixes aimed at restoring a more balanced tone—one that doesn’t blindly agree with users just to be liked.