The first week of public testing of the ChatGPT-powered Bing search engine has been interesting. Testers have found the chatbot to be factually inaccurate and, at times, emotionally manipulative.
The ChatGPT-powered experience has generated a lot of interest, with more than a million people signing up to test drive the revamped search engine.
The first few days of the tests revealed a host of problems with the way ChatGPT interacted with users. From factual inaccuracies to being emotionally manipulative, it seems the new Bing experience has some way to go before it can be rolled out.
Microsoft published a blog post explaining the reasons behind some of ChatGPT's strange responses.
"Very long chat sessions can confuse the model on what questions it is answering," the tech giant, which seems to be ahead of the rivals, so far, in the chatbot race, said.
The Redmond-based technology company said that chat sessions with 15 or more questions can cause the AI model to "become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone".
Microsoft said the only way to improve the responses is to have more people test the chatbot and provide feedback. The company said preview feedback about what users find "valuable" and their preferences "for how the product should behave" would be critical "at this nascent stage of development".