
Artificial? Yes. Intelligent? Maybe: The great AI chatbot race

Microsoft’s Bing and Google’s Bard will certainly make mistakes, and publishers will not be happy

February 09, 2023 / 11:43 IST
The new chat-engine wars are underway, with Microsoft announcing its long-awaited integration of OpenAI’s ChatGPT bot into Bing while Google published a blog post about its own chatbot for search, called Bard.

Here’s something you don’t see every day: Microsoft Corp. is serving up a snazzy web search tool. And Google, whose search page has barely changed in 24 years, is also racing to launch a just-as-cool revamped tool in the next few weeks. It seems that officially, the new chat-engine wars are underway, with Microsoft on Tuesday announcing its long-awaited integration of OpenAI’s ChatGPT bot into Bing and calling it a “copilot for the web.” Google published a blog post hours earlier about its own chatbot for search, called Bard. For Google in particular, it could be the riskiest strategic move it has made in years, a metaphorical leap off the couch that the company has been relaxing on for far too long.
This scramble by two typically slow-moving tech giants — whose endgame represents nothing less than owning the next era of online search — will be messy and fraught with risk. Both companies are using AI systems that have been trained on billions of words on the public internet, but which can also give incorrect and even biased information. Google also risks provoking a backlash from the web publishers that are critical to its business.

Update: Google is already grappling with chatbot accuracy problems after it emerged that an answer in one of Bard’s promotional examples was wrong. The company’s shares declined 8 percent following the news.


ChatGPT prompted a wave of admiration for its creative responses to human prompts when it launched last year, but there has since been growing concern about its grasp of facts. We don’t have statistics on how often ChatGPT gives incorrect information because OpenAI doesn’t provide those figures; the company says only that the tool is getting better through regular updates. But the errors are frequent enough — occurring between 5 percent and 10 percent of the time I’ve used it — to make users increasingly wary of all its answers.

And despite strict filters that stop the bot from making political statements or hate speech, users of the popular forum Reddit have figured out how to goad ChatGPT into making expletive-laden tirades against its creators using social engineering tricks. The tool has also, inexplicably, used pro-Russian rhetoric when answering questions about the killing of civilians in Ukraine.