Machine Learning

Model ensemble

Model ensembling is a technique in machine learning where predictions from multiple individual models are combined to make a final prediction. The goal is to improve the overall accuracy and robustness compared to relying on a single model.

Explanation

Model ensembling leverages the diversity of different models, primarily to reduce variance (and, in methods such as boosting, bias). It often involves training several independent models on the same dataset with different architectures, initializations, or hyperparameters, or training on different subsets of the data (e.g., bagging). The predictions are then aggregated, typically by averaging for regression tasks or by voting for classification tasks. More sophisticated ensemble methods, such as stacking, train a meta-learner to combine the predictions of the base models.

Ensembling works because different models may capture different aspects of the underlying data distribution or make different errors. Combining their predictions emphasizes each model's strengths while mitigating its individual weaknesses. This often improves generalization, especially when the base models are diverse and make uncorrelated errors.
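The two basic aggregation rules mentioned above can be sketched in plain Python. This is an illustrative example, not a library API: the function names and the hard-coded model predictions are hypothetical.

```python
from collections import Counter

def ensemble_average(predictions):
    """Average per-sample predictions from several regression models."""
    return [sum(vals) / len(vals) for vals in zip(*predictions)]

def ensemble_vote(predictions):
    """Majority vote per sample across several classification models."""
    return [Counter(vals).most_common(1)[0][0] for vals in zip(*predictions)]

# Hypothetical predictions from three regression models on four samples.
reg_preds = [
    [2.0, 3.0, 5.0, 1.0],
    [2.5, 2.5, 4.5, 1.5],
    [1.5, 3.5, 5.5, 0.5],
]
print(ensemble_average(reg_preds))  # → [2.0, 3.0, 5.0, 1.0]

# Hypothetical labels from three classifiers on four samples.
clf_preds = [
    ["cat", "cat", "dog", "cat"],
    ["cat", "dog", "dog", "dog"],
    ["dog", "dog", "dog", "cat"],
]
print(ensemble_vote(clf_preds))  # → ['cat', 'dog', 'dog', 'cat']
```

Note how the averaged predictions are smoother than any single model's, and how the vote recovers the majority label even when one classifier disagrees on a sample.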

Related Terms