Put It All Together
Explore multiple linear regression by extending single-variable models to handle multiple inputs, each with its own weight. Understand how to fold the bias term into the model as an additional weight with a constant input, which simplifies the prediction formula. Gain familiarity with the NumPy matrix operations that are essential for managing multi-dimensional data. This lesson equips you to build more accurate and flexible machine learning models that reflect real-world datasets.
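Concretely, with the bias folded in as the first weight, the prediction formula can be written as follows (one standard way to write it):

$$
\hat{y} = w_0 x_0 + w_1 x_1 + \dots + w_n x_n = \mathbf{x} \cdot \mathbf{w}, \qquad x_0 = 1
$$

Fixing $x_0 = 1$ makes the bias $w_0$ just another weight, so a single matrix multiplication covers every input variable.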
Checklist
Let’s check if we have everything in order:
- We wrote the code that prepares the data.
- We upgraded `predict()`.
- We concluded that there is no need to upgrade `loss()`.
- We upgraded `gradient()`.
We can finally apply all those changes to our learning program:
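Here is a minimal sketch of the assembled program, assuming a whitespace-separated data file with a header row and three input columns (the file name `pizza_3_vars.txt`, the learning rate, and the iteration count are illustrative):

```python
import numpy as np

def predict(X, w):
    # Matrix multiplication handles any number of input variables at once
    return np.matmul(X, w)

def loss(X, Y, w):
    # Mean squared error, unchanged in spirit from the single-variable version
    return np.average((predict(X, w) - Y) ** 2)

def gradient(X, Y, w):
    # Gradient of the mean squared error with respect to each weight
    return 2 * np.matmul(X.T, predict(X, w) - Y) / X.shape[0]

def train(X, Y, iterations, lr):
    w = np.zeros((X.shape[1], 1))  # one weight per column of X, bias included
    for i in range(iterations):
        print("Iteration %4d => Loss: %.20f" % (i, loss(X, Y, w)))
        w -= gradient(X, Y, w) * lr
    return w

# Load the data and prepend a column of ones, so that the bias
# becomes just another weight (w[0]) with a constant input of 1
x1, x2, x3, y = np.loadtxt("pizza_3_vars.txt", skiprows=1, unpack=True)
X = np.column_stack((np.ones(x1.size), x1, x2, x3))
Y = y.reshape(-1, 1)

w = train(X, Y, iterations=100000, lr=0.001)
```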
This code is very similar to the code from the previous chapter. Aside from the part that loads and prepares the data, we changed just three lines. Also note that the functions are generic: not only can they process the three-variable pizza dataset, they would work just as well with any number of input variables.
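For example, once training has finished, predicting for a new example is just one more matrix multiplication. The feature values below are made up for illustration:

```python
# One new example: a constant 1 for the bias, then three feature values
x_new = np.array([[1.0, 14.0, 2.0, 18.0]])
print("Prediction: %.2f" % predict(x_new, w)[0, 0])
```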
After running the program, here’s what we get:
```
Iteration 0 => Loss: 1333.56666666666660603369
Iteration 1 => Loss: 151.14311361881479456315
Iteration 2 => Loss: 64.99460808656147037254
…
Iteration 99999 => Loss: ...
```