
Transfer learning

Transfer learning is a machine learning technique where a model trained on one task is repurposed as the starting point for a model on a second, related task. It leverages knowledge gained from solving a source task to improve learning performance on a target task.

Explanation

Transfer learning is particularly useful when the target task has limited labeled data. Instead of training a model from scratch, the pre-trained model's learned features (e.g., edge detection in images, or grammatical structures in text) are transferred and fine-tuned for the new task. This can lead to faster training, improved accuracy, and the ability to train effective models with smaller datasets.

The core idea is to transfer the learned representations from the source task to the target task, typically in one of two ways:

- Feature extraction: the weights of the pre-trained layers are frozen, and the model is used as a fixed feature extractor; only a new task-specific head is trained.
- Fine-tuning: some or all of the pre-trained weights are allowed to be updated during training on the target task.

Different transfer learning strategies exist, varying in how much of the pre-trained model is reused and how much new training is performed.
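The feature-extraction strategy can be sketched in a few lines. The example below is a hypothetical toy illustration, not from any particular library: the "pretrained" extractor is simulated by a fixed random projection (standing in for weights learned on a source task), its parameters are frozen, and only a new logistic-regression head is trained on a small target dataset. All names (`W_pre`, `extract_features`, etc.) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: stands in for weights learned on a
# source task. Simulated here as a fixed random projection; it is
# frozen, so no updates ever touch W_pre during target-task training.
W_pre = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen layer: used purely as a feature extractor.
    return np.tanh(x @ W_pre)

# Toy target task with a small labeled dataset (64 examples).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task-specific head (logistic regression), trained from scratch.
w = np.zeros(8)
b = 0.0

def loss_and_grads(w, b):
    feats = extract_features(X)          # frozen features
    logits = feats @ w + b
    p = 1.0 / (1.0 + np.exp(-logits))    # sigmoid
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_logits = (p - y) / len(y)
    return loss, feats.T @ grad_logits, grad_logits.sum()

initial_loss, _, _ = loss_and_grads(w, b)

# Gradient descent on the head only; W_pre never changes.
for _ in range(200):
    loss, gw, gb = loss_and_grads(w, b)
    w -= 0.5 * gw
    b -= 0.5 * gb

final_loss, _, _ = loss_and_grads(w, b)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Fine-tuning would differ only in also applying gradient updates to the extractor's weights, usually with a smaller learning rate so the pre-trained representations are adjusted rather than overwritten.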

Related Terms