Large Language Models hallucinate: Here's what you need to know

Hallucinations are a significant ethical concern associated with LLMs. They occur when a model generates output that is fluent and grammatically correct but disconnected from reality, often built on false assumptions.

June 19, 2023 / 15:06 IST

Artificial Intelligence (AI) has grown by leaps and bounds in the last few years, giving rise to advanced large language models (LLMs) such as OpenAI's GPT and Google's PaLM 2.

These tools are capable of many things, from writing poems and songs to entire novels, but they also present some notable challenges.


The problems go beyond copyright violations: there have been several instances where Bard or ChatGPT answered questions with responses that were flatly wrong and completely made up. This is one of the most concerning challenges of LLMs, known as hallucination.

What are LLMs?