
Step 4 - Update the Parameters

Explore how to update parameters in gradient descent by combining gradients with a learning rate. Learn why reversing the gradient's sign minimizes loss and see how parameter adjustments improve model predictions step-by-step.


Updating parameters

In the final step, we use the gradients to update the parameters. Since we are trying to minimize the loss, and the gradient points in the direction of the loss's steepest increase, we reverse the sign of the gradient for the update, stepping downhill instead of uphill.
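To see why flipping the sign works, consider a minimal sketch on a hypothetical one-parameter loss, loss(b) = (b - 3)^2; the function names and the 0.1 step factor below are illustrative, not part of the lesson:

```python
# Hypothetical one-parameter loss with its minimum at b = 3
def loss(b):
    return (b - 3) ** 2

def grad(b):
    return 2 * (b - 3)  # d(loss)/db

b = 0.0
step = 0.1 * grad(b)   # the gradient points toward higher loss
print(loss(b - step))  # against the gradient: 5.76 (loss decreased from 9.0)
print(loss(b + step))  # with the gradient: 12.96 (loss increased)
```

Moving against the gradient shrinks the loss; moving with it grows the loss, which is exactly why the update rule subtracts the gradient.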

There is still another hyperparameter to consider: the learning rate, denoted by the Greek letter eta, η (which looks like the letter n). It represents the multiplicative factor we apply to the gradient for the parameter update. Our equation now becomes the following:

$$b = b - \eta \frac{\partial \text{MSE}}{\partial b}$$

...
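Putting the pieces together, here is a minimal sketch of the whole update step for a simple linear regression (yhat = b + w * x) trained with MSE; the synthetic data, the lr name, and the 1,000-epoch loop are assumptions made for illustration:

```python
import numpy as np

np.random.seed(42)
x = np.random.rand(100)
y = 1.0 + 2.0 * x + 0.1 * np.random.randn(100)  # synthetic data: true b=1, w=2

b, w = np.random.randn(), np.random.randn()  # random initialization
lr = 0.1  # the learning rate, eta

for epoch in range(1000):
    yhat = b + w * x            # forward pass: compute predictions
    error = yhat - y            # prediction errors
    # Gradients of MSE = mean(error**2) with respect to each parameter
    b_grad = 2 * error.mean()
    w_grad = 2 * (x * error).mean()
    # Reversed sign: subtract eta times the gradient to move downhill
    b = b - lr * b_grad
    w = w - lr * w_grad

print(b, w)  # both should approach the true values 1.0 and 2.0
```

With lr too small, the parameters crawl toward the minimum; with lr too large, the updates overshoot and the loss can diverge, which is why the learning rate is treated as a hyperparameter to tune.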