Kernel Linear Discriminant
Explore how kernel linear discriminant models extend traditional discriminant analysis to nonlinear classification tasks by using kernel functions and feature space mappings. Understand both two-class and multiclass formulations, implement them using kernel ridge regression, and visualize decision boundaries through hands-on coding examples.
In the previous lessons, we explored kernels, the kernel trick, and the Gram matrix.
We saw how kernels enable us to model nonlinear patterns by implicitly mapping data into a higher-dimensional feature space without explicitly computing that space. In this lesson, we extend those ideas to discriminant analysis.
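To make the recap concrete, here is a minimal sketch of the kernel trick in action: computing an RBF Gram matrix directly from inner products, without ever materializing the (infinite-dimensional) feature map. The function name `rbf_kernel` and the choice of the RBF kernel are illustrative assumptions, not part of the lesson's fixed API.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2).

    Computed purely from squared distances (i.e., inner products),
    never forming the RBF feature space explicitly -- the kernel trick.
    """
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

# Three 2-D points; the Gram matrix is 3x3, symmetric, with ones on the diagonal
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = rbf_kernel(X, X)
```

Every entry of `K` is an implicit feature-space inner product, which is all the kernel discriminant will need.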
Kernel linear discriminant models build decision boundaries in the feature space induced by a kernel function.
This enables us to construct nonlinear classification boundaries in the original input space while maintaining a linear model in the feature space. We begin with the traditional discriminant function, introduce its generalized linear form using feature mappings, develop the two-class kernel formulation, and then extend the approach to multiclass settings using one-hot targets.
Finally, we implement the method and visualize the resulting decision boundaries.
Discriminant function
Discriminant analysis aims to classify observations into one of $K$ classes using $d$ predictor variables. Formally, consider a dataset $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$, where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \{1, \dots, K\}$.
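As a preview of the implementation route the lesson takes, here is a hedged sketch of the two-class kernel discriminant fit via kernel ridge regression: targets are coded $\pm 1$, the dual weights $\boldsymbol{\alpha}$ solve $(\mathbf{K} + \lambda \mathbf{I})\boldsymbol{\alpha} = \mathbf{y}$, and classification uses the sign of $f(\mathbf{x}) = \sum_i \alpha_i\, k(\mathbf{x}_i, \mathbf{x})$. The function names, the RBF kernel, and the regularization value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of RBF inner products, as in the kernel-trick recap
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def fit_kernel_discriminant(X, y, lam=1e-2, gamma=1.0):
    """Two-class kernel discriminant via kernel ridge regression.

    y holds +1/-1 class codes; the dual weights alpha solve
    (K + lam * I) alpha = y.
    """
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i k(x_i, x); classify by the sign of f
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha)

# XOR-style toy data: not linearly separable in the input space,
# but linearly separable in the RBF-induced feature space
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([1, 1, -1, -1], dtype=float)
alpha = fit_kernel_discriminant(X, y)
preds = predict(X, alpha, X)  # recovers the training labels
```

The decision boundary this produces is linear in the feature space but nonlinear in the original inputs, which is exactly the behavior the rest of the lesson develops and visualizes.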