Artificial Intelligence

Hallucination

A phenomenon where a large language model (LLM) generates text that is factually incorrect, nonsensical, or detached from reality while appearing confident and coherent.

Explanation

Hallucinations occur because LLMs are probabilistic engines: they predict the next most likely token in a sequence based on patterns in their training data, rather than consulting a database of verified facts. Common causes include insufficient or biased training data, overfitting, and the model's inherent drive to satisfy a prompt even when it lacks the necessary information. There are two main types: intrinsic hallucinations, which contradict the provided source material, and extrinsic hallucinations, which introduce unverifiable information not present in the source. Reducing hallucinations is a major area of research, involving techniques like Retrieval-Augmented Generation (RAG) and fine-tuning.
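The RAG idea mentioned above can be sketched in a few lines: instead of letting the model answer from its parametric memory alone, relevant passages are retrieved from a trusted corpus and prepended to the prompt, grounding the answer in verifiable text. This is a minimal illustration only; the corpus, the word-overlap scoring (a toy stand-in for embedding-based vector search), and the prompt format are all assumptions, not any particular library's API.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG).
# Assumed corpus and scoring; real systems use vector embeddings.

CORPUS = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Mount Everest is the highest mountain above sea level.",
    "Python was first released by Guido van Rossum in 1991.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for
    an embedding-based similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from source
    text instead of guessing from parametric memory."""
    context = "\n".join(retrieve(query, CORPUS))
    return (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt("How tall is the Eiffel Tower?")
```

Because the prompt now contains the relevant passage, a model that follows instructions can cite "330 metres" from the context rather than inventing a figure, which is the core mechanism by which RAG reduces extrinsic hallucinations.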
