
Step 2a - Compute the Loss

Understand the difference between error and loss in regression problems, and learn how to compute the loss using Mean Squared Error with batch gradient descent. Explore the concepts of batch, stochastic, and mini-batch gradient descent to grasp how loss is calculated and optimized during model training.

Difference between error and loss

There is a subtle but fundamental difference between error and loss.

The error is the difference between the predicted value and the actual value (label), computed for a single data point. So, for a given point i (from our dataset of N points), its error is:

error_i = \hat{y_i} - y_i

The error of the first point in our dataset (i = 0) can be represented like this:
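As a quick sketch, here is how that error could be computed for a single point, assuming a simple linear model and illustrative values for its parameters and the first data point (all numbers below are made up for the example):

```python
# Hypothetical example: a simple linear model y_hat = b + w * x,
# with assumed parameter values and an assumed first data point (i = 0).
b, w = 1.0, 2.0       # assumed model parameters (intercept and slope)
x0, y0 = 0.5, 2.2     # assumed feature and label of the first point

y_hat0 = b + w * x0   # the model's prediction for the first point
error0 = y_hat0 - y0  # error_0 = y_hat_0 - y_0
print(error0)         # a negative error: the model under-predicted
```

Note that the sign of the error tells us the direction of the miss: here the prediction (2.0) falls below the label (2.2), so the error is negative.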

On the other hand, the loss is some sort of aggregation of errors for a set of data points.
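For regression, that aggregation is typically the Mean Squared Error: square each point's error and average over the set. A minimal sketch, using assumed example values for the labels and predictions:

```python
import numpy as np

# Illustrative values only: three labels and the model's predictions for them.
y = np.array([1.0, 2.0, 3.0])      # actual values (labels)
y_hat = np.array([1.1, 1.9, 3.3])  # predicted values

errors = y_hat - y        # per-point errors: error_i = y_hat_i - y_i
mse = (errors ** 2).mean()  # loss: the mean of the squared errors
print(mse)
```

Squaring makes positive and negative errors count equally (they cannot cancel each other out) and penalizes large misses more heavily than small ones.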

It seems rather obvious to compute the loss for all (N) data points, right? Well, yes and no. Although it ...