Types of Ensemble Learning
Explore the different types of ensemble learning techniques such as bagging and boosting, and understand how combining weak learners through majority voting improves predictive accuracy. This lesson helps you grasp the strategies behind building diverse, high-performance machine learning models using ensemble methods.
In the preceding lesson, we established the mathematical basis for ensemble learning: combining multiple models significantly reduces the probability of a collective error, provided the base models are diverse. Ensemble learning techniques are the practical methodologies used to create and combine these diverse models. They are broadly categorized by the strategy they use to introduce diversity: either training models independently in parallel, or training them sequentially, with each new model built iteratively on the results of the previous ones.
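To make the error-reduction claim concrete, we can compute the probability that a majority of independent classifiers is correct using the binomial distribution. The following is a minimal sketch (the function name and the choice of 5 models at 70% individual accuracy are illustrative, not from the lesson):

```python
from math import comb

def majority_accuracy(n_models, p):
    """Probability that a strict majority of n independent models,
    each correct with probability p, votes for the correct answer.
    Assumes n_models is odd so no ties occur."""
    k_min = n_models // 2 + 1
    return sum(
        comb(n_models, k) * p**k * (1 - p) ** (n_models - k)
        for k in range(k_min, n_models + 1)
    )

# Five independent models, each 70% accurate on their own:
print(round(majority_accuracy(5, 0.7), 4))  # -> 0.8369
```

Five models that are each right 70% of the time yield a majority vote that is right about 84% of the time, and the gap widens as more diverse models are added. The independence assumption is the idealized case; in practice, diversity among the base models is what approximates it.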
This lesson introduces the fundamental strategies used to achieve collective intelligence, including majority voting, the primary method for combining predictions, and two distinct paradigms for building the ensemble: bagging (parallel) and boosting (sequential). We will also examine the concept of the weak learner, the simple yet crucial component that forms the building blocks of most powerful ensemble models.
Majority voting
Majority voting is a simple and widely used technique in ensemble learning that combines the predictions of multiple individual models (often called base models or weak learners) to make a final prediction. The idea behind majority voting is straightforward: each model in the ensemble makes a prediction, and the final prediction is determined by a majority vote among these individual predictions.
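The voting step itself can be sketched in a few lines of Python. This is a hedged illustration, not a library implementation; the function name and the example labels are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the base models' predictions.

    `predictions` holds one class label per base model. Ties are
    broken by whichever label appears first in the input.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base models classify the same email:
votes = ["spam", "ham", "spam"]
print(majority_vote(votes))  # -> spam
```

Even though one model disagreed, the ensemble's final prediction follows the two-model majority, which is exactly how individual errors get outvoted.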
Consider ...