Rethinking the Training Loop
Explore how to restructure your PyTorch training loop by building a higher-order training step function that reduces boilerplate code and tracks loss values efficiently. Understand how to update model configuration and training code to improve modularity and clarity, setting a foundation for more advanced data handling and model development.
Training step
As already mentioned, the higher-order function that builds a training step function for us takes the key elements of our training loop as arguments: the model, the loss function, and the optimizer. The training step function it returns takes two arguments, features and labels, and returns the corresponding loss value.
Creating the higher-order function for the training step
Apart from returning the loss value, the inner perform_train_step() function below is the same as the code inside the loop in model training V0. The code should look like this:
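Since the original listing is not reproduced here, the sketch below shows what such a higher-order function might look like; the outer name `make_train_step` is an assumption (only the inner `perform_train_step` name comes from the text), and the four steps inside mirror a standard PyTorch training loop:

```python
import torch

def make_train_step(model, loss_fn, optimizer):
    # Builds a training step function from the three key
    # elements of the training loop: model, loss, optimizer
    def perform_train_step(x, y):
        # Set the model to training mode
        model.train()
        # Step 1: compute the model's predictions (forward pass)
        yhat = model(x)
        # Step 2: compute the loss
        loss = loss_fn(yhat, y)
        # Step 3: compute gradients for the parameters (backward pass)
        loss.backward()
        # Step 4: update parameters and zero the gradients
        optimizer.step()
        optimizer.zero_grad()
        # Unlike the body of the original loop, return the loss
        # value as a plain Python float
        return loss.item()

    # Return the function that will be called inside the training loop
    return perform_train_step
```

Because the returned function closes over `model`, `loss_fn`, and `optimizer`, the training loop itself shrinks to a single call per batch: `loss = train_step(x_batch, y_batch)`.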
Updating model configuration code
Next, we update the model configuration code to call this higher-order function and build a train_step function. Remember to run the data preparation script first, since the configuration step assumes the data is already available.
The code for the configured model would look like the following:
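The configured-model listing itself is not shown here; the sketch below illustrates the idea under stated assumptions. The synthetic data stands in for the book's data preparation script, `make_train_step` is the assumed name of the higher-order builder described above (redefined compactly so the snippet runs on its own), and the model, learning rate, and loss are illustrative choices:

```python
import torch

# Higher-order builder as described above (compact version, repeated
# here so the snippet is self-contained; the name is an assumption)
def make_train_step(model, loss_fn, optimizer):
    def perform_train_step(x, y):
        model.train()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()
    return perform_train_step

# Stand-in for the data preparation script: synthetic linear data
torch.manual_seed(42)
x_train = torch.randn(100, 1)
y_train = 2 * x_train + 1 + 0.1 * torch.randn(100, 1)

# Model configuration: define model, optimizer, and loss...
lr = 0.1
model = torch.nn.Sequential(torch.nn.Linear(1, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
loss_fn = torch.nn.MSELoss(reduction='mean')

# ...and combine them into a single train_step function
train_step = make_train_step(model, loss_fn, optimizer)

# The training loop then reduces to calling train_step and
# collecting the returned loss values
losses = []
for epoch in range(100):
    losses.append(train_step(x_train, y_train))
```

With this structure, the configuration cell owns every object with state (model, optimizer), while the loop only ever sees the `train_step` function and the loss values it returns.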
The ...