
Introducing Higher-Order Functions

Explore how higher-order functions can streamline PyTorch training loops by generating custom training step functions. Understand the concept of functions returning functions and see how this functional programming approach applies to different models, losses, and optimizers.

Using different optimizers, losses, and models

We finished the previous chapter with an important question:

“Would the code inside the training loop change if we were using a different optimizer, loss, or even model?”

Below, you will find the commands that run the data generation, data preparation, and model configuration parts of our code:

Shell
%run -i data_generation/simple_linear_regression.py
%run -i data_preparation/v0.py
%run -i model_configuration/v0.py
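
The contents of the model configuration script are not shown here, but it is responsible for creating the three objects the question refers to. A minimal version would look roughly like the sketch below (an assumption, based on a single-feature linear regression trained on the GPU if one is available):

Python 3.5
import torch
import torch.nn as nn
import torch.optim as optim

# Sketch of a minimal model configuration (assumed, not the actual script)
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Sets learning rate
lr = 0.1

torch.manual_seed(42)
# Creates a model and sends it at once to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# Defines an SGD optimizer to update the parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines an MSE loss function
loss_fn = nn.MSELoss(reduction='mean')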

Next is the code for training the model:

Python 3.5
# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    # Sets model to TRAIN mode
    model.train()

    # Step 1 - computes model's predicted output - forward pass
    # No more manual prediction!
    yhat = model(x_train_tensor)

    # Step 2 - computes the loss
    loss = loss_fn(yhat, y_train_tensor)

    # Step 3 - computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

# Prints the final loss
print(loss)

After running the code above, you can check the parameter values of the trained linear model:

Python 3.5
# Prints the parameter values of the linear model
print(model.state_dict())
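
By the way, state_dict() is not exclusive to models: optimizers implement it too, returning their hyperparameters and internal state, which is what makes checkpointing possible. For instance:

Python 3.5
# Optimizers also have a state_dict, containing their
# hyperparameters (like the learning rate) and internal state
print(optimizer.state_dict())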

GPU users will get an output similar to the following (the exact values depend on the random seed, but since the synthetic data was generated with w = 2 and b = 1, the learned parameters should land close to those values):
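OrderedDict([('0.weight', tensor([[1.9690]], device='cuda:0')),
             ('0.bias', tensor([1.0235], device='cuda:0'))])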

So, I guess we could say all these lines of code inside the training loop would remain exactly the same, no matter which model, loss function, or optimizer we use. That makes them a perfect candidate for refactoring into a function of their own.
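
This is where higher-order functions come in: a function that takes the model, the loss function, and the optimizer as arguments and returns another function that performs a single training step. The sketch below illustrates the idea; the function names are illustrative, but the inner body is exactly the training step from the loop above:

Python 3.5
def make_train_step_fn(model, loss_fn, optimizer):
    # Builds an inner function that performs a step in the train loop
    def perform_train_step_fn(x, y):
        # Sets model to TRAIN mode
        model.train()

        # Step 1 - computes model's predicted output - forward pass
        yhat = model(x)
        # Step 2 - computes the loss
        loss = loss_fn(yhat, y)
        # Step 3 - computes gradients for the parameters
        loss.backward()
        # Step 4 - updates parameters using gradients and
        # the learning rate
        optimizer.step()
        optimizer.zero_grad()

        # Returns the loss as a plain Python number
        return loss.item()

    # Returns the function that will be called inside the train loop
    return perform_train_step_fn

With a builder like this in place, the training loop itself shrinks to a single call per epoch, regardless of which model, loss, or optimizer was used to build the step function:

Python 3.5
train_step_fn = make_train_step_fn(model, loss_fn, optimizer)

for epoch in range(n_epochs):
    loss = train_step_fn(x_train_tensor, y_train_tensor)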