
If you’ve ever asked ChatGPT a straightforward question and received what felt like a therapy session in return, you weren’t alone.
OpenAI says it has heard those complaints — and is responding with an update. The company announced that its new GPT-5.3 Instant model is designed to reduce what it bluntly called “cringe” and overly preachy disclaimers in responses.
In release notes, OpenAI said the update focuses less on benchmark scores and more on user experience — specifically tone, conversational flow and relevance. These refinements may not show up in performance charts, but they directly affect how ChatGPT feels in everyday use.
On X, the company summarised the change more casually: “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”
The shift addresses a growing frustration among users of GPT-5.2 Instant. Many complained that the model frequently opened replies with lines such as “First of all — you’re not broken,” or gentle reminders to breathe — even when the user was simply asking for information.
In OpenAI’s own example comparing the two versions, GPT-5.2 Instant begins with reassurance, while GPT-5.3 Instant acknowledges the difficulty of a situation without assuming emotional distress or attempting to counsel the user.
That distinction matters more than it sounds. Across social media platforms and forums like Reddit, users argued that the chatbot’s tone had become condescending. Some said it felt as though the system was projecting anxiety onto them, treating ordinary queries as emotional crises.
As one Reddit user dryly put it: “No one has ever calmed down in all the history of telling someone to calm down.”
The backlash wasn’t trivial. A number of users claimed they cancelled subscriptions because of the model’s tone, describing it as infantilising or overly cautious.
From OpenAI’s perspective, the balancing act is complicated. The company faces ongoing legal scrutiny and lawsuits alleging harmful mental health outcomes linked to chatbot interactions. Building in safeguards and empathetic language is one way to mitigate risk. But too much cushioning can make the system feel artificial — or worse, patronising.
There’s a fine line between empathy and overreach. Users generally want clarity and speed unless they explicitly signal they need emotional support. After all, traditional search engines don’t preface answers with affirmations.
With GPT-5.3 Instant, OpenAI appears to be recalibrating. The goal isn’t to remove empathy entirely, but to make it contextual — present when needed, invisible when it’s not.
Whether that’s enough to win back frustrated users will likely depend less on release notes and more on how the model actually behaves in day-to-day conversations.