Generative AI has moved beyond being a buzzword and is used by millions around the world daily. The technology, however, remains hit-and-miss when it comes to certain questions or areas.
Former IT minister Rajeev Chandrasekhar, in a post on X, offered an explanation for why ChatGPT or Google Gemini sometimes gives bizarre responses. “LLMs "bullshit content" comes from most models being trained on content/datasets that are - to politely use the phrase - NOT quality assured. Thats why you have the embarrassing sight of billion dollar Gemini/ChatGPT many occasions spewing nonsense.”
The former minister used a programming adage to describe why AI chatbots deliver inaccurate responses. “Garbage in, Garbage out is an old adage in programming especially if ur depending on scraping the internet,” he added in the post.
AI hallucinations are real

There have been many cases of AI chatbots giving incorrect or distorted responses — a phenomenon known as AI hallucinations. Be it ChatGPT or Google Gemini, these chatbots have a tendency to provide inaccurate or irrelevant information, generating responses that may be nonsensical, misleading, or even offensive. As Chandrasekhar said, much depends on the data scraped from the internet. These misinterpretations occur due to factors such as overfitting, training data bias, and high model complexity.
It is still early days, and companies such as Google, OpenAI, and Microsoft are continuing to train the models that power their generative AI chatbots. All of these companies include a disclaimer about the accuracy of responses generated by their chatbots.