
Subgradient Descent—Lasso Regression

Explore subgradient descent and its application to Lasso regression, a convex but nondifferentiable optimization problem. Understand how subgradients enable gradient-based methods on L1-regularized objectives. Learn to implement this technique with NumPy, analyze its behavior, and visualize optimization paths for feature selection and model prediction.

Gradient descent works well on objective functions that are both convex and differentiable at every point. But what happens when the objective function is convex yet not differentiable at one or more points? Let’s understand this with the example of lasso regression.
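To make the idea concrete before turning to lasso, here is a minimal sketch (illustrative, not part of the lesson) of subgradient descent on the classic nondifferentiable convex function $f(x) = |x|$, which has a kink at $x = 0$. The step-size schedule and starting point are arbitrary choices for the demo:

```python
import numpy as np

def subgradient_abs(x):
    # A subgradient of f(x) = |x|: sign(x) away from 0.
    # At x = 0, any value in [-1, 1] is a valid subgradient;
    # np.sign conveniently returns 0 there.
    return np.sign(x)

def subgradient_descent(x0, steps=200):
    x = x0
    for t in range(1, steps + 1):
        # Diminishing step size 1/t: required for convergence,
        # since subgradient steps do not shrink near the optimum.
        x -= (1.0 / t) * subgradient_abs(x)
    return x

x_star = subgradient_descent(3.0)
print(x_star)
```

Note that the iterates do not settle exactly at $x = 0$; they oscillate around it with an amplitude on the order of the current step size, which is characteristic of subgradient methods.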

Lasso regression

Lasso regression, or $L_1$-regularized linear regression, is a type of linear regression problem in which the $L_1$ penalty shrinks the model coefficients toward zero, driving some of them exactly to zero. Consider the scenario where we want to predict the price of a house based on its size, location, number of rooms, and other features. Using lasso regression, we can identify the most important features affecting the price and discard the irrelevant ones.
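As a hedged NumPy sketch of this feature-selection behavior, the snippet below runs subgradient descent on a lasso objective over synthetic "house" data. The data, penalty strength `lam`, and step schedule are all assumptions for illustration; `np.sign(0) = 0` serves as the subgradient choice at the kinks of the $L_1$ term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed setup): N houses, d features,
# with only three features actually relevant to the price.
N, d = 100, 5
X = rng.normal(size=(N, d))
w_true = np.array([2.0, 0.0, -3.0, 0.0, 1.5])
Y = X @ w_true + 0.1 * rng.normal(size=N)

lam = 0.1          # L1 penalty strength (illustrative choice)
w = np.zeros(d)

for t in range(1, 2001):
    residual = X @ w - Y
    # Subgradient of (1/2N)||Xw - Y||^2 + lam * ||w||_1
    g = (X.T @ residual) / N + lam * np.sign(w)
    w -= (0.5 / np.sqrt(t)) * g   # diminishing step size

print(np.round(w, 2))
```

The coefficients of the irrelevant features hover near zero while the relevant ones stay large (slightly shrunk by the penalty). Unlike coordinate-descent lasso solvers, the plain subgradient method does not produce exact zeros, only values oscillating close to zero.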

Assume $X \in \mathbb{R}^{N \times d}$ denotes the set of $d$-dimensional input features (size, location, number of rooms, etc.) for $N$ houses, and $Y \in \mathbb{R}^N$ their corresponding true labels (prices) ...