Artificial Intelligence

Hallucinations

Instances where an artificial intelligence model, particularly a large language model, generates output that is factually incorrect, nonsensical, or unrelated to the input prompt while maintaining a confident tone.

Explanation

Hallucinations in AI occur when a model produces information that is not grounded in its training data or external reality. Because large language models (LLMs) operate on probabilistic patterns rather than a true understanding of facts, they may prioritize linguistic coherence over factual accuracy. This phenomenon can be triggered by ambiguous prompts, gaps in training data, or the model's inherent design to predict the next likely word in a sequence. Mitigating hallucinations is a primary focus in AI safety and reliability, often addressed through techniques like Retrieval-Augmented Generation (RAG), human-in-the-loop verification, and prompt engineering.
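The retrieval step behind Retrieval-Augmented Generation can be sketched in a few lines. This is a minimal illustration, not a production implementation: real systems rank documents with vector embeddings and pass the prompt to an LLM, whereas here a hypothetical `retrieve` function scores documents by simple word overlap, and `build_grounded_prompt` prepends the results so the model answers from supplied context rather than from memory.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# All names are illustrative; real systems use embeddings and an actual LLM call.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model is grounded in sources."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "The Great Wall of China is visible from low orbit only with aid.",
]
prompt = build_grounded_prompt("How tall is the Eiffel Tower?", docs)
```

Because the model is instructed to answer only from the retrieved context, a claim it cannot support from that context is easier to catch, which is the core idea behind using RAG to reduce hallucinations.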

Related Terms