Ever since chatbots like ChatGPT entered the fray, curious people have asked them what the AI knows about them. While some have received factual or harmless responses, others haven't been so lucky. But what happened to Arve Hjalmar Holmen, a man from Norway, was beyond shocking: ChatGPT falsely accused him of being a convicted murderer who had killed his own children, according to a report by The Verge.
Holmen, like many of us, was simply curious to see what the AI would say about him. But instead of a simple or neutral answer, ChatGPT produced an entire false story. The chatbot claimed that he had murdered two of his children, attempted to kill a third, and was serving a 21-year prison sentence. Worse still, the response wasn't entirely random: it mixed in real details, such as his hometown and the actual number and gender of his children.
Holmen was horrified beyond measure. The thought that an AI could fabricate such a lie and present it as fact was deeply concerning. What if someone else had seen this and believed it? Could this false claim spread? Instead of brushing it off, Holmen decided to take a stand.
A case against OpenAI
Holmen reached out to Noyb, an Austrian privacy rights group, which then filed a formal complaint against OpenAI with Datatilsynet, Norway's data protection authority. Noyb argues that this is a serious violation of privacy laws under Europe's General Data Protection Regulation (GDPR), which requires companies to ensure that personal data is accurate and can be corrected when it is wrong, the report reveals.
Joakim Söderberg, a lawyer at Noyb, made an important point: "You can't just spread false information and then hide behind a tiny disclaimer saying it might not be true." And he's right: imagine if AI systems could casually ruin reputations without consequences.
How did this happen?
ChatGPT is not a search engine like Google: it does not pull information from verified sources. Instead, it generates responses based on statistical patterns in the text it was trained on. This often leads to "hallucinations", where the AI confidently presents made-up information as if it were fact.
While OpenAI does include a disclaimer that ChatGPT can make mistakes, critics argue that a small warning isn’t enough—especially when the mistake is as serious as accusing someone of murder.
What happens next?
If you ask ChatGPT about Holmen now, it no longer makes the false claim. Instead, it only brings up news of the legal complaint. This suggests that OpenAI has blocked that specific response.
Interestingly, this isn't the first time Noyb has filed a complaint against OpenAI. A few months ago, the group raised a concern that ChatGPT incorrectly listed a public figure's birth date. While that mistake wasn't nearly as damaging, it highlights the same core issue: what happens when AI gets facts wrong, and who is responsible for fixing it?
For now, Holmen is fighting to ensure OpenAI is held accountable. But this incident leaves us with a bigger question—as AI continues to grow, how do we make sure it doesn’t destroy lives in the process?