Autograd
Explore how PyTorch's autograd simplifies automatic differentiation by computing gradients of the tensors involved in model training. Learn to use the backward method to calculate gradients from the loss and zero_ to reset them, enabling effective parameter updates in your regression models.
Introduction to autograd
Autograd is PyTorch’s automatic differentiation package. Thanks to it, we do not need to worry about partial derivatives, the chain rule, or anything of the sort.
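To see what that means in practice, here is a minimal sketch (the values are made up for illustration): we create a tensor that requires gradients, build a simple expression from it, and let autograd work out the derivative for us.

```python
import torch

# Autograd tracks every operation performed on tensors that require gradients
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2          # y = x^2, so dy/dx = 2x

y.backward()        # autograd applies the chain rule behind the scenes
print(x.grad)       # tensor(6.) — the derivative 2 * 3.0
```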
The backward method
So, how do we tell PyTorch to do its thing and compute all gradients? That is the role of the backward() method. It computes gradients for all tensors that require gradients and are involved in the computation of a given variable.
Do you remember the starting point for computing the gradients? It was the loss, as we computed its partial derivatives w.r.t. our parameters. Hence, we need to invoke the backward() method on the corresponding Python variable: loss.backward().
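Here is a hedged sketch of how that looks for a simple linear regression. The data, the "true" parameters (1.0 and 2.0), and the parameter names b and w are all assumptions made for illustration; the point is that a single call to loss.backward() populates the grad attribute of every parameter that requires gradients.

```python
import torch

# Hypothetical data for a linear regression (y = b + w * x), just for illustration
torch.manual_seed(42)
x_train = torch.rand(100, 1)
y_train = 1.0 + 2.0 * x_train + 0.1 * torch.randn(100, 1)

# Parameters that require gradients
b = torch.randn(1, requires_grad=True)
w = torch.randn(1, requires_grad=True)

yhat = b + w * x_train                   # predictions
loss = ((yhat - y_train) ** 2).mean()    # MSE loss

loss.backward()          # computes gradients for every tensor that requires them
print(b.grad, w.grad)    # partial derivatives of the loss w.r.t. b and w
```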
...The following refers to the steps (old and new) of the process occurring in the code above:
- New “step 3 - compute gradients”: instead of working out the partial derivatives by hand, we simply call loss.backward() and let autograd do the work (see the sketch below).
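Since the original code is not reproduced here, the loop below is only a sketch of what such a training process might look like with the new step 3 in place. The data, learning rate, and number of epochs are arbitrary assumptions; note how the gradients are reset with zero_() after each update so they do not accumulate across epochs.

```python
import torch

torch.manual_seed(42)
x_train = torch.rand(100, 1)
y_train = 1.0 + 2.0 * x_train + 0.1 * torch.randn(100, 1)

b = torch.randn(1, requires_grad=True)
w = torch.randn(1, requires_grad=True)
lr = 0.1  # arbitrary learning rate

for epoch in range(1000):
    # Step 1: compute predictions
    yhat = b + w * x_train
    # Step 2: compute the loss
    loss = ((yhat - y_train) ** 2).mean()
    # New step 3: let autograd compute the gradients
    loss.backward()
    # Step 4: update parameters (outside of autograd's tracking)
    with torch.no_grad():
        b -= lr * b.grad
        w -= lr * w.grad
    # Reset gradients so they do not accumulate across epochs
    b.grad.zero_()
    w.grad.zero_()

print(b, w)  # should approach the values used to generate the data (1.0 and 2.0)
```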