Moneycontrol PRO

Looking for emotional support? ChatGPT’s secret weapon is artificial emotional intelligence

There is good reason to think chatbots work better as an emotional companion than as a provider of facts. These tools are exceptionally good at mimicking empathy through learning from text scraped from the web, including emotive reactions posted on social media platforms

April 26, 2023 / 10:47 IST
For the time being, AI chatbots are at least more reliable for their emotional skills than for their grasp of facts.

Earlier this year, Princeton Computer Science Professor Arvind Narayanan set up a voice interface to ChatGPT for his nearly four-year-old daughter. It was partly an experiment and partly because he believed AI agents would one day be a big part of her life.

Narayanan’s daughter was naturally curious, often asking about animals, plants and the human body, and he thought ChatGPT could give useful answers to her questions, he told me. To his surprise, the chatbot developed by OpenAI also did an impeccable job at showing empathy, once he told the system it was speaking to a small child.

“What happens when the lights turn out?” his daughter asked.

“When the lights turn out, it gets dark, and it can be a little scary,” ChatGPT responded through a synthetic voice. “But don’t worry! There are lots of things you can do to feel safe and comfortable in the dark.”

It then gave some advice on using nightlights, closing with a reminder that “it’s normal to feel a bit scared in the dark.” Narayanan’s daughter was visibly reassured by the explanation, he wrote in a Substack post.

Microsoft Corp and Alphabet Inc’s Google are rushing to enhance their search engines with the large language model technology that underpins ChatGPT — but there is good reason to think the technology works better as an emotional companion than as a provider of facts.

That might sound weird, but what’s weirder is that Google’s Bard and Microsoft’s Bing, which is based on ChatGPT’s underlying technology, are being positioned as search tools when they have an embarrassing history of factual errors: Bard gave incorrect information about the James Webb Space Telescope in its very first demo, while Bing goofed on a series of financial figures in its own.

The cost of factual mistakes is high when a chatbot is used for search. But when it’s designed as a companion, it’s much lower, according to Eugenia Kuyda, founder of the AI companion app Replika, which has been downloaded more than 5 million times. “It won’t ruin the experience, unlike with search where small mistakes can break the trust in the product,” Kuyda added.

Margaret Mitchell, a former Google AI researcher who co-wrote a paper on the risks of large language models, has said large language models are simply “not fit for purpose” as search engines.

Language models make mistakes because the data they’re trained on often includes errors and because the models have no ground truth on which to verify what they say. Their designers may also prioritise fluency over accuracy.

That is one reason why these tools are exceptionally good at mimicking empathy. After all, they’re learning from text scraped from the web, including the emotive reactions posted on social media platforms like Twitter and Facebook, and from the personal support shown to users of forums like Reddit and Quora. Conversations from movie and TV show scripts, dialogue from novels, and research papers on emotional intelligence all go into the training pot to make these tools appear empathetic.

No surprise then that some people are using ChatGPT as a kind of robo-therapist, according to an April feature in Bloomberg Businessweek. One person said they used it to avoid becoming a burden on others, including their own human therapist.

To see if I could measure ChatGPT’s empathic abilities, I put it through an online emotional intelligence test, giving it 40 multiple-choice questions and telling it to answer each question with a corresponding letter. The result: It aced the quiz, getting perfect scores in the categories of social awareness, relationship management and self-management, and only stumbling slightly in self-awareness.

In fact, ChatGPT performed better on the quiz than I did, and it also beat my colleague, global banking columnist Paul Davies, even though we are both human and have real emotions (or so we think).

There’s something unreal about a machine providing us comfort with synthetic empathy, but it does make sense. Our innate need for social connection and our brain’s ability to mirror others’ feelings mean we can get a sense of understanding even if the opposite party doesn’t genuinely “feel” what we feel. Inside our brains, so-called mirror neurons activate when we perceive empathy from others — including chatbots — helping us feel a sense of connection.

Empathy, of course, is a multifaceted concept, and for us to truly experience it, we do arguably need another warm body in the room sharing our feelings for a moment in time.

Thomas Ward, a clinical psychologist at King’s College London who has researched software’s role in therapy, cautions against assumptions that AI can adequately fill a void for people who need mental health support, particularly if their issues are serious. A chatbot, for instance, probably won’t acknowledge that a person’s feelings are too complex to understand. ChatGPT, in other words, rarely says “I don’t know,” because it was designed to err on the side of confidence rather than caution in its answers.

And more generally, people should be wary of habitually turning to chatbots as outlets for their feelings. “Subtle aspects of the human connection like the touch of a hand or knowing when to speak and when to listen, could be lost in a world that sees AI chatbots as a solution for human loneliness,” Ward says.

That might end up creating more problems than we think we’re solving. But for the time being, chatbots are at least more reliable for their emotional skills than for their grasp of facts.

Parmy Olson is a Bloomberg Opinion columnist covering technology. Views are personal and do not represent the stand of this publication. Credit: Bloomberg
first published: Apr 26, 2023 10:47 am

