Update the Gradient

Explore how to update the gradient in logistic regression by deriving the partial derivative of the log loss function and applying gradient descent. Understand how wrapping the linear model in a sigmoid function turns raw outputs into probabilities for binary classification. Gain insight into how the model's shape changes and why the new loss keeps training stable.

Updating the loss function

Now that we have a brand-new loss function, let’s look up its gradient. Here, straight from the math textbooks, is the partial derivative of the log loss with respect to the weight:

$$\frac{\partial L}{\partial w} = \frac{1}{m} \sum_{i=1}^{m} x_i \,(\hat{y}_i - y_i)$$
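To make this concrete, here is a minimal sketch in Python with NumPy, assuming a single feature, a weight `w`, a bias `b`, and a learning rate `lr` chosen purely for illustration (none of these names come from the lesson itself):

```python
import numpy as np

def sigmoid(z):
    # Squash a linear output into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(x, y, w, b, lr=0.1):
    """One gradient descent update for single-feature logistic regression.

    Implements dL/dw = (1/m) * sum(x_i * (y_hat_i - y_i)) from above,
    plus the analogous dL/db = (1/m) * sum(y_hat_i - y_i).
    """
    m = len(x)
    y_hat = sigmoid(w * x + b)       # predicted probabilities in (0, 1)
    error = y_hat - y                # (y_hat_i - y_i) for each example
    dw = np.dot(x, error) / m        # partial derivative w.r.t. w
    db = np.sum(error) / m           # partial derivative w.r.t. b
    return w - lr * dw, b - lr * db  # step against the gradient

# Tiny illustrative run on made-up data
x = np.array([0.5, 1.5, 2.5, 3.5])
y = np.array([0, 0, 1, 1])
w, b = 0.0, 0.0
for _ in range(1000):
    w, b = gradient_step(x, y, w, b)
print(w, b, sigmoid(w * x + b).round(2))
```

On these made-up points, the predicted probabilities should drift toward 0 for the two negative examples and toward 1 for the two positive ones as the steps accumulate.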