
Foundation Models

Large-scale machine learning models trained on vast, diverse datasets; once trained, they can be adapted or fine-tuned to perform a wide range of downstream tasks.

Explanation

Foundation models represent a significant shift in artificial intelligence, away from task-specific models and toward general-purpose systems. Models such as GPT-4, BERT, and CLIP are typically trained with self-supervised learning on massive amounts of data, including text, images, or code. Once trained, they possess broad capabilities that can be specialized for particular applications through fine-tuning or prompting. The name reflects their role as a base, or foundation, on which other AI applications are built. Foundation models are characterized by their immense scale and by emergent capabilities: they can often perform tasks they were never explicitly trained to do.
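One common and lightweight way to adapt a frozen pretrained model is a linear probe: the pretrained encoder is left untouched, and only a small task-specific head is trained on its embeddings. The sketch below is a minimal, self-contained illustration of that idea, not a real foundation model; the fixed random projection stands in for a frozen pretrained encoder, and the logistic-regression head is the only part that fine-tuning updates. All names and sizes here are hypothetical choices for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained foundation model: a frozen feature
# extractor mapping raw inputs to general-purpose embeddings.
# (In practice this would be a large transformer loaded from a
# checkpoint; the random projection is purely illustrative.)
W_frozen = rng.normal(size=(10, 32)) / np.sqrt(10)

def embed(x):
    """Frozen 'foundation' embedding: never updated during adaptation."""
    return np.tanh(x @ W_frozen)

# Small labeled dataset for one downstream task.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Linear probe: train only a logistic-regression head on top of
# the frozen embeddings, via plain gradient descent.
Z = embed(X)
w = np.zeros(Z.shape[1])
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1 / (1 + np.exp(-(Z @ w + b)))  # sigmoid predictions
    grad = p - y                        # gradient of log loss
    w -= lr * Z.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((Z @ w + b > 0).astype(float) == y).mean()
print(f"linear-probe training accuracy: {acc:.2f}")
```

Because the encoder is frozen, this kind of adaptation needs only a small labeled dataset and trains in seconds, which is exactly what makes a single pretrained foundation model reusable across many downstream tasks.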
