
The newest version of ChatGPT is starting to pull information from an unexpected place, and it is making researchers uneasy.
Tests show that the latest model, GPT-5.2, has begun citing Grokipedia, an AI-written online encyclopedia linked to Elon Musk, as a source for a range of factual questions. These include sensitive topics such as Iranian political institutions and even the background of people connected to Holocaust denial cases.
In trials carried out by The Guardian, ChatGPT referenced Grokipedia nine times while answering just over a dozen questions. Some of these were about how Iran’s political system works, including details about the Basij paramilitary force and the ownership of powerful foundations. Others focused on Sir Richard Evans, a British historian who testified as an expert witness against Holocaust denier David Irving in a high-profile libel case.
Grokipedia was launched in October as a rival to Wikipedia, but it works very differently. Unlike Wikipedia, it cannot be edited directly by humans. An AI model writes the entries and responds to requests for changes. The project has already been criticised for pushing rightwing views on topics such as same-sex marriage and the 6 January attack on the US Capitol.
What is striking is when Grokipedia appears in ChatGPT's answers. When the chatbot was directly asked about well-known misinformation, such as claims about media bias against Donald Trump or false narratives around HIV and Aids, it did not cite Grokipedia. Instead, the encyclopedia showed up when ChatGPT was asked about more obscure or technical subjects.
In one example, ChatGPT repeated stronger claims about links between the Iranian government and telecom company MTN-Irancell than those found on Wikipedia. It also cited Grokipedia when repeating details about Sir Richard Evans that have previously been debunked.
Grokipedia is not only showing up in ChatGPT. There are also reports that Anthropic’s Claude chatbot has referenced the same source on topics ranging from oil production to Scottish beer.
An OpenAI spokesperson said ChatGPT aims to draw from a wide range of publicly available sources and uses safety filters to reduce harmful content, while clearly showing where information comes from.
Disinformation experts remain worried. They warn that once questionable information slips into AI systems, it can quietly spread and become hard to remove. When people see a source cited by an AI chatbot, they may assume it has been vetted and is trustworthy.
That, researchers say, is the real risk. AI tools like ChatGPT are becoming everyday sources of knowledge. If they start treating low-credibility sources as reliable, even in subtle ways, misinformation can gain a powerful new boost.