Step 2b - Computing the Loss Surface
Learn how the loss can be computed for all possible values in a given range as well as how it can be visually represented.
Loss surface
We have just computed the loss (2.74) corresponding to our randomly initialized parameters (b = 0.49 and w = -0.13). What if we did the same for all possible values of b and w? Well, not *all* possible values, but all combinations of evenly spaced values in a given range, like in the example shown in the code below:
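A minimal sketch of how such a grid can be built with NumPy's `meshgrid`. The values `true_b = 1` and `true_w = 2` used to center the ranges are assumptions for illustration; the section only tells us the ranges were chosen around the true parameter values:

```python
import numpy as np

# Assumed true parameter values (placeholders for illustration);
# the ranges below are simply centered on them
true_b, true_w = 1, 2

# 101 evenly spaced values for each parameter within a range
b_range = np.linspace(true_b - 3, true_b + 3, 101)
w_range = np.linspace(true_w - 3, true_w + 3, 101)

# meshgrid builds every combination of b and w as two (101, 101) matrices
bs, ws = np.meshgrid(b_range, w_range)
print(bs.shape, ws.shape)  # (101, 101) (101, 101)
```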
The result of the meshgrid operation was two (101, 101) matrices representing the values of each parameter inside a grid. What does one of these matrices look like?
Let us check this out by displaying one of them in the code below:
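A quick way to inspect one of the matrices, continuing from the snippet above:

```python
# Each row of bs repeats the same 101 evenly spaced b values, while
# each column of ws repeats the w values, so any (row, column) position
# pairs one value of b with one value of w
print(bs)
```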
Sure, we are somewhat cheating here, since we know the true values of b and w and can choose the perfect ranges for the parameters. However, it is for educational purposes only, so we’ll keep it for now.
Next, we could use those values to compute the corresponding predictions, errors, and losses.
Computing the predictions
Let us start by taking a single data point from the training set and computing the predictions for every combination in our grid:
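A sketch of that computation, continuing from the grid above. The names `dummy_x` and `dummy_yhat`, and the fabricated feature value, are assumptions for illustration; in the chapter the point would come from the actual training set (e.g., something like `x_train[0]`):

```python
# Hypothetical single feature value standing in for a real training point,
# so the sketch runs on its own
dummy_x = 0.5

# Broadcasting: the scalar dummy_x multiplies every entry of the ws matrix,
# and bs is added elementwise, yielding a (101, 101) grid of predictions
dummy_yhat = bs + ws * dummy_x
print(dummy_yhat.shape)  # (101, 101)
```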
Thanks to its broadcasting capabilities, NumPy understands that we want to multiply the same x value by every entry in the ws matrix. This operation resulted in a grid of predictions for that single data ...