Saturday, March 22, 2025

Norwegian man sues OpenAI over false murder accusation by ChatGPT

Holmen’s case highlights an ongoing issue with AI-generated content: hallucinations, where chatbots present false information as fact.

A Norwegian man has filed a complaint against OpenAI after its chatbot, ChatGPT, falsely accused him of murdering his two sons and serving a 21-year prison sentence. Arve Hjalmar Holmen, who has never been accused of any crime, was shocked to discover the fabricated claim when he asked the chatbot about himself.

The AI-generated response stated that Holmen’s two young sons had been found dead in a pond near their home in Trondheim, Norway, in December 2020. While the chatbot correctly identified the number of his children, their genders, and the approximate gap in their ages, the rest of the information was entirely false. “Some think that there is no smoke without fire. The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said.

Legal Complaint Cites GDPR Violations

The digital rights group Noyb (None of Your Business) has taken up Holmen’s case, filing a formal complaint with the Norwegian Data Protection Authority (Datatilsynet). The complaint argues that OpenAI violated Europe’s General Data Protection Regulation (GDPR), specifically Article 5(1)(d), which requires companies to process accurate and up-to-date personal data.

Noyb has demanded that OpenAI delete the defamatory output, adjust its model to prevent similar errors, and face an administrative fine to deter future violations. The organization criticized OpenAI’s reliance on disclaimers, stating that warning users about potential inaccuracies does not absolve the company from legal responsibility. “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” said Noyb data protection lawyer Joakim Söderberg.

Growing Concerns Over AI ‘Hallucinations’

Holmen’s complaint underscores a persistent problem with AI-generated content: so-called hallucinations, in which chatbots present fabricated information as fact. While OpenAI has since updated its model to search current news articles when responding to questions about individuals, concerns remain about the reliability of large language models.

This is not the first time an AI chatbot has fabricated damaging information. Microsoft’s AI tool Copilot falsely accused German journalist Martin Bernklau of serious crimes, while Google’s Gemini suggested using glue to stick cheese to pizza and recommended that humans eat one rock per day.

AI developers are still grappling with why chatbots hallucinate. Experts believe the issue stems from how large language models generate text: they predict the most likely next word based on patterns in their training data rather than verifying facts.
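To illustrate the mechanism experts describe, the toy Python sketch below uses made-up “training” sentences and a simple word-frequency table; it is not how ChatGPT or any OpenAI system is actually built. It always appends the statistically most likely next word, so the continuation it produces reads fluently yet is asserted without any step that checks whether it is true.

```python
# Toy illustration only (not OpenAI's system): a language model picks the most
# probable next word from patterns in its training text, with no step that
# checks whether the resulting sentence is true.
from collections import Counter, defaultdict

# Made-up "training data": the model only sees word sequences, never facts.
training_text = (
    "the man was found guilty of fraud . "
    "the man was found near the pond . "
    "the man was found guilty of murder ."
).split()

# Count which word most often follows each word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def continue_text(prompt, length=5):
    """Greedily append the statistically most likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # no fact-checking here
    return " ".join(words)

# Prints a fluent, plausible-looking sentence that nothing has verified.
print(continue_text("the man was"))
```

Real large language models are vastly more sophisticated, but the core behaviour researchers point to when explaining hallucinations is the same: the system chooses plausible words, not verified facts.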

Calls for Stronger Regulation

Noyb has emphasized that OpenAI’s approach of quietly updating its models does not ensure that false information is permanently removed. Because user interactions can be fed back into future training, individuals like Holmen have no way of knowing whether damaging falsehoods remain embedded in the system.

As regulatory scrutiny over AI intensifies, this case could set a precedent for holding companies accountable for AI-generated misinformation. Noyb has urged European authorities to impose stricter oversight, ensuring AI tools comply with existing data protection laws.