Introducing Higher-Order Functions
Learn about higher-order functions and how they can help reduce boilerplate code.
We'll cover the following...
Using different optimizers, loss functions, and models
We finished the previous chapter with an important question:
“Would the code inside the training loop change if we were using a different optimizer, loss, or even model?”
Below, you will find the commands that run the data generation, data preparation, and model configuration parts of our code:
%run -i data_generation/simple_linear_regression.py
%run -i data_preparation/v0.py
%run -i model_configuration/v0.py
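For context, the model configuration script is assumed to set up the three objects the training loop relies on: the model, the optimizer, and the loss function. A minimal sketch of what model_configuration/v0.py might contain (this is an assumption for illustration, not the actual file contents):

import torch
import torch.nn as nn
import torch.optim as optim

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Sets learning rate
lr = 0.1

torch.manual_seed(42)

# Creates a model and sends it at once to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)

# Defines an SGD optimizer to update the model's parameters
optimizer = optim.SGD(model.parameters(), lr=lr)

# Defines a mean squared error loss function
loss_fn = nn.MSELoss(reduction='mean')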
Next is the code for training the model:
# Defines number of epochs
n_epochs = 1000

for epoch in range(n_epochs):
    # Sets model to TRAIN mode
    model.train()

    # Step 1 - computes model's predicted output - forward pass
    # No more manual prediction!
    yhat = model(x_train_tensor)

    # Step 2 - computes the loss
    loss = loss_fn(yhat, y_train_tensor)

    # Step 3 - computes gradients for both "b" and "w" parameters
    loss.backward()

    # Step 4 - updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

print(loss)
After running the code above, you can inspect the parameter values of the linear model:
# Printing the parameter values of the linear model
print(model.state_dict())
GPU users will get an output similar to the following:
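The exact numbers depend on the data and the random seed, so take the values below as purely illustrative; assuming the model is an nn.Sequential wrapping a single nn.Linear layer, the output takes roughly this shape (the device='cuda:0' tag only appears when the tensors live on the GPU):

OrderedDict([('0.weight', tensor([[1.9690]], device='cuda:0')),
             ('0.bias', tensor([1.0235], device='cuda:0'))])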
So, I guess we could say all these lines of code (lines 5-21 of the model training code) ...
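To make the idea concrete, that whole block can be wrapped in a higher-order function: a function that takes the model, the loss function, and the optimizer as arguments and returns another function that performs a single training step. Here is a minimal sketch; the names make_train_step and perform_train_step are illustrative:

def make_train_step(model, loss_fn, optimizer):
    # Builds an inner function that performs a step in the training loop
    def perform_train_step(x, y):
        # Sets model to TRAIN mode
        model.train()
        # Step 1 - computes model's predicted output - forward pass
        yhat = model(x)
        # Step 2 - computes the loss
        loss = loss_fn(yhat, y)
        # Step 3 - computes gradients for the parameters
        loss.backward()
        # Step 4 - updates parameters using gradients and
        # the learning rate
        optimizer.step()
        optimizer.zero_grad()
        # Returns the loss as a plain Python number
        return loss.item()

    # The inner function keeps references to the model, loss function,
    # and optimizer through its closure, so the loop never touches them
    return perform_train_step

With that in place, the training loop no longer mentions the model, loss, or optimizer at all, so it stays exactly the same no matter which ones we pick:

train_step = make_train_step(model, loss_fn, optimizer)

n_epochs = 1000
for epoch in range(n_epochs):
    loss = train_step(x_train_tensor, y_train_tensor)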