Are AI Models Doomed to Always Hallucinate?

Posted by Peter Rudin on 27. October 2023 in News

Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

This tendency to invent ‘facts’ is a phenomenon known as hallucination, and it happens because of the way today’s LLMs are developed and trained.

LLMs and generative AI models have no real intelligence; they are statistical systems that predict words, images, speech, music or other data.
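To make the idea of statistical prediction concrete, here is a minimal, hypothetical sketch (not a real LLM, just a toy conditional-probability table): the model selects the most plausible next token given its context, with no mechanism for checking whether the continuation is factually true.

```python
import random

# Toy "language model": a table of next-token probabilities learned
# from text statistics. All entries here are invented for illustration.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    # Both continuations are statistically plausible; one is false.
    ("capital", "of"): {"France": 0.6, "Atlantis": 0.4},
}

def predict(context, probs, rng=random.Random(0)):
    """Sample the next token from the model's distribution for `context`."""
    dist = probs[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(predict(("capital", "of"), next_token_probs))
```

Because the model only ranks continuations by plausibility, it can confidently emit "Atlantis" where a fact was expected. This is the mechanism behind hallucination, scaled down to two tokens.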

Hallucinations are a problem when generated statements are factually incorrect or violate human, social or cultural values.

Hence, treating models’ predictions with a sceptical eye seems to be the best approach.

