It all started as a curiosity for Arve Hjalmar Holmen, a Norwegian father: he asked ChatGPT who he was, to find out what it knew about his life. From that moment, the chaos began.
The events took place in August 2024, when Holmen consulted ChatGPT and the chatbot replied that he was a Norwegian individual known for a tragic event, stating that he had killed two of his children, had tried to kill a third, and had been sentenced to 21 years in prison. Of all this information, ChatGPT got only a few details right: the number and sex of his children and the name of his hometown.
The information turned out to be completely false. That is strange, since, in general, an AI obtains its information through algorithms or internet searches. We asked ChatGPT itself about the case, and this was its answer:
There is no evidence of any website that accuses Arve Hjalmar Holmen of a crime. The false information generated by ChatGPT appears to be the result of a “hallucination” by the model, in which it combines real data with invented information that has no specific source to support it.
These “hallucinations” occur because language models such as ChatGPT generate responses based on patterns learned from large amounts of textual data. However, they do not always distinguish between verified and unverified information, which can lead to the creation of incorrect or misleading content.
In Holmen’s case, ChatGPT mixed accurate details, such as the number and gender of his children and his hometown, with completely false facts, like the accusations of murder. This highlights the need to improve the accuracy and reliability of the responses generated by artificial intelligence systems, especially when it comes to sensitive personal information.
Because of this, various European privacy rights groups have reported the intrusion and fabrication to the authorities. One of these groups, noyb, has filed a complaint against OpenAI and included a screenshot of the answer the chatbot gave to the Norwegian man’s question.
Noyb said that the incorrect data could remain part of the large language model’s (LLM’s) dataset, and that the Norwegian has no way of knowing whether the false information about him has been permanently deleted, since ChatGPT feeds user data back into its system for training purposes.
“Some think there is no smoke without fire,” they note in a statement. “The fact that someone can read this result and believe it is true is what scares me the most. Adding a disclaimer that you do not comply with the law does not invalidate it. AI companies cannot simply ‘hide’ false information from users while they continue to process false information internally. AI companies should stop acting as if the GDPR did not apply to them, when it clearly does. If hallucinations do not stop, people’s reputations can easily be damaged.”