Types of Ensemble Learning
Understand key ensemble learning strategies such as majority voting, bagging, and boosting. Learn how combining weak learners, either in parallel or sequentially, improves model accuracy and reduces errors. This lesson explains these core concepts and their role in developing robust machine learning models.
In the preceding lesson, we established the mathematical basis for ensemble learning: combining multiple models significantly reduces the probability of a collective error, provided the base models are diverse. Ensemble learning techniques are the practical methodologies used to create and combine these diverse models. These methods are broadly categorized by the strategy they use to introduce diversity: training models either independently in parallel, or sequentially and iteratively.
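The claim that an ensemble errs less often than its members can be checked numerically. The following is a minimal sketch, assuming `n_models` independent base classifiers that each err with the same probability `eps`; the ensemble is wrong only when a majority of them are wrong, which follows a binomial distribution:

```python
from math import comb

def ensemble_error(n_models, eps):
    """Probability that a majority of n independent base models err,
    given each model has individual error rate eps."""
    k_min = n_models // 2 + 1  # smallest number of wrong models that flips the vote
    return sum(
        comb(n_models, k) * eps**k * (1 - eps) ** (n_models - k)
        for k in range(k_min, n_models + 1)
    )

# Eleven independent models, each wrong 25% of the time:
# the majority vote errs far less often than any single model.
print(ensemble_error(11, 0.25))
```

Note that this bound relies on the independence assumption; real base models trained on the same data are correlated, which is exactly why the techniques below work to inject diversity.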
This lesson introduces the fundamental strategies used to achieve collective intelligence, including majority voting, the primary method for combining predictions, and two distinct paradigms for building the ensemble: bagging (parallel) and boosting (sequential). We will also examine the concept of the weak learner, the simple yet crucial component that forms the building blocks of most powerful ensemble models.
Majority voting
Majority voting is a simple and widely used technique in ensemble learning that combines the predictions of multiple individual models (often called base models or weak learners) to make a final prediction. The idea behind majority voting is straightforward: each model in the ensemble makes a prediction, and the final prediction is determined by a majority vote among these individual predictions.
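This voting rule can be sketched in a few lines of plain Python. The labels and the three-model ensemble below are hypothetical, chosen only to illustrate the counting step:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by the most base models.

    `predictions` holds one predicted label per base model in the ensemble.
    """
    votes = Counter(predictions)
    return votes.most_common(1)[0][0]

# Three hypothetical base models classify one test point:
# two predict class 1, one predicts class 0, so the ensemble outputs 1.
print(majority_vote([1, 0, 1]))
```

In practice, libraries such as scikit-learn wrap this logic (e.g., in a voting classifier), but the underlying decision is exactly this count of per-model predictions.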
Consider an example of binary classification where we aim to determine whether a test data point belongs to class ...