Ridge and Lasso Regression
Learn about ridge and lasso regression, how they compare, and why the intersection of their penalty contours with the MSE contours matters.
In the previous lesson, we learned that regularization helps control the bias–variance trade-off and prevents overfitting by adding a penalty on large weights. Now we will look at the two most widely used regularization methods:
- Ridge (L2 regularization)
- Lasso (L1 regularization)
Both techniques shrink the model’s weights, but the type of penalty they use leads to very different results. Understanding this difference is essential: Lasso can eliminate features entirely, while Ridge cannot, and the reason is purely geometric.
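As a quick preview of that difference in practice, here is a minimal sketch using scikit-learn's `Ridge` and `Lasso` estimators on an arbitrary synthetic dataset (the penalty strength `alpha=1.0` is just an illustrative choice):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 10 features, but only 3 actually influence the target.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2 penalty
lasso = Lasso(alpha=1.0).fit(X, y)  # L1 penalty

# Ridge shrinks every weight but leaves all of them nonzero;
# lasso typically sets many of the uninformative weights exactly to zero.
print("Ridge zero weights:", np.sum(ridge.coef_ == 0))
print("Lasso zero weights:", np.sum(lasso.coef_ == 0))
```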
Ridge and Lasso objectives
Both Ridge and Lasso regression are special forms of regularized linear regression. They use the simplest model type (a linear model) and the standard way to measure error (the squared loss), differing only in the regularization penalty they add.
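For concreteness, one standard way to write the two objectives (using the notation defined in the next subsection, with regularization strength $\lambda \ge 0$; by convention the intercept $w_0$ is not penalized):

$$
\text{Ridge:} \quad \min_{w} \; \sum_{i=1}^{n} \big(y^{(i)} - \hat{y}^{(i)}\big)^2 + \lambda \sum_{j=1}^{d} w_j^2
$$

$$
\text{Lasso:} \quad \min_{w} \; \sum_{i=1}^{n} \big(y^{(i)} - \hat{y}^{(i)}\big)^2 + \lambda \sum_{j=1}^{d} |w_j|
$$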
The core model and loss function
Before introducing the penalty, we must define the model that makes a prediction and the loss function that measures the error.
Linear model ($\hat{y}$)
A linear model assumes the output ($\hat{y}$, the prediction) is a simple, weighted sum of the inputs ($x_1, \dots, x_d$). The goal is to find the best set of weights ($w$) that connect the inputs to the output.
- We have $n$ training examples, $(x^{(i)}, y^{(i)})$ for $i = 1, \dots, n$. Each input $x^{(i)}$ has $d$ features.
- The model expression: $\hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \dots + w_d x_d$
- $w_0$ is the intercept (or bias).
- $w_1$ to $w_d$ are the slopes or feature weights.
To simplify the math, we often combine $w_0$ with the other weights by adding a constant $x_0 = 1$ to the start of the feature vector, so the prediction becomes a single dot product: $\hat{y} = \sum_{j=0}^{d} w_j x_j = w^\top x$.
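As a minimal sketch with made-up numbers, the two forms below compute the same prediction; prepending the constant collapses the intercept-plus-weighted-sum into one dot product:

```python
import numpy as np

w0 = 0.5                         # intercept (bias) w_0
w = np.array([2.0, -1.0, 3.0])   # feature weights w_1..w_3
x = np.array([1.5, 0.2, -0.7])   # one input with d = 3 features

# Prediction written out: intercept plus weighted sum of features.
y_hat_explicit = w0 + w @ x

# Same prediction after absorbing the intercept: prepend 1 to x
# and w0 to w, then take a single dot product.
w_aug = np.concatenate(([w0], w))
x_aug = np.concatenate(([1.0], x))
y_hat_compact = w_aug @ x_aug

print(y_hat_explicit, y_hat_compact)  # both: 0.5 + 3.0 - 0.2 - 2.1 = 1.2
```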