Generalized Linear Models
Learn about generalized linear models for regression and classification through interactive code.
In many real-world problems, the relationship between input features and the target variable is not strictly linear. Linear models often fall short when data contains interactions, curved patterns, or complex class boundaries. Generalized linear models (GLMs) offer a flexible extension, maintaining a linear model structure while allowing for nonlinear transformations of the input features. By mapping the original data into a transformed feature space using basis functions, GLMs enable both regression and classification models to capture richer structures without giving up the simplicity and interpretability of linear methods.
This lesson explores how nonlinearity is introduced through basis functions and demonstrates how generalized linear models behave in both regression and classification settings.
Generalized linear model for regression
A regression model that is linear in the parameters $\mathbf{w}$, but not necessarily linear in the input features $\mathbf{x}$, is known as a generalized linear model for regression. This is achieved by mapping the original input features into a higher-dimensional feature space using a set of nonlinear basis functions, $\boldsymbol{\phi}(\mathbf{x})$.
The model is defined as:

$$y(\mathbf{x}, \mathbf{w}) = \mathbf{w}^{\top}\boldsymbol{\phi}(\mathbf{x}) = \sum_{j=0}^{M-1} w_j\,\phi_j(\mathbf{x}),$$

where $\phi_0(\mathbf{x}) = 1$ so that $w_0$ acts as the bias term. Here:
- Parameters ($\mathbf{w}$): The vector of coefficients learned during training; the model is linear with respect to these parameters.
- Basis functions ($\phi_j(\mathbf{x})$): These functions transform the original features $\mathbf{x}$. For example, if the input is a single feature $x$, then $\boldsymbol{\phi}(x)$ could be the polynomial basis $[1, x, x^2, x^3]$.
Note: A GLM is linear in the transformed features $\boldsymbol{\phi}(\mathbf{x})$, but is typically nonlinear in the original input features $\mathbf{x}$. This nonlinear mapping allows the model to fit complex, curved data patterns using simple linear methods.
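To make this concrete, here is a minimal sketch of a GLM for regression, assuming NumPy and scikit-learn are available. It maps a single input $x$ to the cubic polynomial basis $[1, x, x^2, x^3]$ and fits ordinary least squares on the transformed features; the sine-shaped data and the variable names are illustrative choices, not part of the lesson's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Nonlinear ground truth: y = sin(x) + noise (illustrative data)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100)

# Map x -> phi(x) = [1, x, x^2, x^3]
phi = PolynomialFeatures(degree=3, include_bias=True)
X_phi = phi.fit_transform(X)

# The model is linear in the weights w, even though it is
# nonlinear in the original input x. The bias column is already
# part of phi(x), so the separate intercept is disabled.
model = LinearRegression(fit_intercept=False)
model.fit(X_phi, y)

print("Learned weights w:", model.coef_)
print("R^2 on training data:", model.score(X_phi, y))
```

The fitting procedure is still ordinary linear least squares; only the features change. Swapping the polynomial basis for Gaussian or sigmoidal basis functions changes the shapes the model can capture without changing how it is trained.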
Nonlinear transformations in regression
Suppose we are building a model to predict a student’s total marks ($y$) based on hours studied ($x_1$) and class attendance ($x_2$). A simple linear model would look like:
...