OpenAI CEO Sam Altman, in a candid admission during the first episode of OpenAI’s newly launched podcast, said that people place a surprisingly “high degree of trust” in ChatGPT—despite the AI’s well-documented tendency to hallucinate, that is, to generate factually incorrect content.
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates,” Altman stated during the discussion. “It should be the tech that you don't trust that much.”
Altman’s remarks came amid growing public reliance on artificial intelligence tools, especially large language models (LLMs) such as ChatGPT, for tasks ranging from professional research and customer service to parenting advice and creative writing. The CEO himself admitted to using ChatGPT extensively for parenting guidance during his son’s early months, noting both its remarkable utility and its inherent risks.
The comment underscored what Altman described as a paradox at the heart of AI’s mainstream adoption: while hallucination—a known flaw in LLMs—remains a critical issue, users continue to trust and rely heavily on AI systems because of their conversational fluency, contextual memory, speed, and ease of use.
“ChatGPT is super useful. But it hallucinates. That’s just the reality of current AI,” Altman explained, reiterating the importance of critical thinking when engaging with AI tools. He warned against the blind acceptance of outputs from such systems, particularly in high-stakes sectors such as healthcare, legal services, and education.
Despite the widespread trust, Altman emphasised that ChatGPT and similar tools should not be treated as infallible sources of truth. “It’s not super reliable,” he added.
The podcast episode also delved into broader concerns around data privacy, transparency, and monetisation of AI tools. Altman spoke of OpenAI’s efforts to maintain user trust as the company explores newer features like persistent memory and considers potential advertising-driven revenue models.
“We have to preserve user trust,” he said, stressing that transparency and robust privacy measures would be critical in the next phase of AI evolution. His comments come amid ongoing scrutiny around data practices, particularly in light of OpenAI’s legal battle with The New York Times over alleged unauthorised use of copyrighted material in training datasets.
The lawsuit has brought into focus the tension between rapid AI innovation and compliance with legal and ethical standards.