
Ensemble learning

Ensemble learning is a machine learning technique that combines the predictions from multiple individual models to make more accurate and robust predictions than any single model alone. These individual models, often called base learners, can be trained on the same dataset using different algorithms or on different subsets of the data.
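As a minimal sketch of how predictions are combined (using made-up base-learner outputs, not a trained model), majority voting for classification and averaging for regression look like this in plain Python:

```python
from collections import Counter
from statistics import mean

def majority_vote(predictions):
    """Combine classifier outputs: each inner list holds one base learner's labels."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

def average(predictions):
    """Combine regressor outputs by averaging per example."""
    return [mean(values) for values in zip(*predictions)]

# Hypothetical predictions from three base learners on four examples.
clf_preds = [
    ["cat", "dog", "cat", "dog"],
    ["cat", "cat", "cat", "dog"],
    ["dog", "cat", "cat", "dog"],
]
print(majority_vote(clf_preds))  # -> ['cat', 'cat', 'cat', 'dog']

reg_preds = [
    [1.0, 2.0],
    [2.0, 4.0],
    [3.0, 6.0],
]
print(average(reg_preds))  # -> [2.0, 4.0]
```

Note that the second example disagrees with the first learner on example 2, yet the majority recovers "cat": individual errors are outvoted as long as most learners are right, which is why diversity among base learners matters.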

Explanation

Ensemble learning leverages the diversity of multiple models to improve overall performance. Several base learners are trained, and their predictions are aggregated: averaging is common for regression tasks, and majority voting for classification tasks. More sophisticated techniques, such as stacking, train a meta-model to learn how best to combine the base learners' predictions.

Ensembles often outperform single models because combining diverse learners can reduce variance (as in bagging, where averaging over models trained on different bootstrap samples smooths out individual errors) and bias (as in boosting, where each successive learner focuses on the mistakes of the previous ones). Key benefits include improved accuracy, robustness to noisy data, and better generalization. Popular ensemble methods include bagging (e.g., Random Forests), boosting (e.g., AdaBoost, Gradient Boosting Machines), and stacking.
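The bagging idea can be sketched end-to-end in a few lines: train many weak learners on bootstrap resamples of the data and average their predictions. The "stump" regressor below (a single-threshold model) and the toy step-function dataset are illustrative assumptions, not part of any standard library:

```python
import random
from statistics import mean

def fit_stump(xs, ys):
    """Fit a one-split regression stump: predict the mean of y on each side
    of the threshold that minimizes squared error."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = mean(left), mean(right)
        err = sum((y - (lm if x <= t else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:  # degenerate sample: fall back to a constant predictor
        c = mean(ys)
        return lambda x: c
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def bagged_predict(models, x):
    """Bagging aggregation: average the predictions of all base learners."""
    return mean(m(x) for m in models)

# Toy step-function data: y jumps from 0 to 1 at x = 5.
random.seed(0)
xs = list(range(10))
ys = [0.0] * 5 + [1.0] * 5

# Train 25 stumps, each on its own bootstrap resample of the data.
models = []
for _ in range(25):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    models.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))

print(bagged_predict(models, 2))  # close to 0
print(bagged_predict(models, 8))  # close to 1
```

Because each stump sees a slightly different resample, their thresholds vary; averaging them yields a smoother, lower-variance predictor than any single stump, which is the mechanism Random Forests exploit at scale.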

Related Terms