Optax: Advanced Features
While the last lesson reviewed some common loss functions and optimizers, Optax has much more to offer than we can reasonably cover in a single lesson, so we’ll restrict ourselves to a handful of its features here.
Not content with the default of a constant learning rate, the deep learning community has been experimenting with learning rates that vary over the course of training. Optax offers more than a dozen implementations of this technique, which is known as learning-rate scheduling. Let’s review a few:
- Exponential decay
- Cosine decay
- Combining multiple existing schedules
- Injecting hyperparameters
The first of these, exponential decay, scales the learning rate by a constant factor at a fixed rate, so the value follows an exponential curve over the course of training. Optax implements it as exponential_decay(). We can switch between continuous decay and discrete, stepwise decay by setting the staircase argument to False or True, respectively.
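Here is a minimal sketch of setting up such a schedule (the hyperparameter values are illustrative):

```python
import optax

# The learning rate is multiplied by decay_rate once per transition_steps steps.
schedule = optax.exponential_decay(
    init_value=1e-2,        # starting learning rate
    transition_steps=1000,  # steps over which one factor of decay_rate is applied
    decay_rate=0.9,         # multiplicative decay factor
    staircase=False,        # False: smooth decay; True: discrete, stepwise jumps
)

# A schedule is simply a function from step count to learning rate.
for step in (0, 500, 1000, 2000):
    print(step, schedule(step))

# It can be passed wherever an optimizer expects a learning rate:
optimizer = optax.sgd(learning_rate=schedule)
```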
In 2017, Ilya Loshchilov and Frank Hutter proposed Stochastic Gradient Descent with Warm Restarts (SGDR). It uses a cosine decay schedule, which can be represented as:

$$\eta_t = \eta_{\min} + \frac{1}{2}\left(\eta_{\max} - \eta_{\min}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_{\max}}\,\pi\right)\right)$$

where $\eta_{\max}$ is the initial learning rate, $\eta_{\min}$ is the final learning rate, $T_{cur}$ is the current step, and $T_{\max}$ is the total number of decay steps.
In Optax, we can define this as cosine_decay_schedule() with the parameters:

- init_value: the initial learning rate ($\eta_{\max}$)
- decay_steps: the number of steps over which the value is annealed ($T_{\max}$)
- alpha: the final multiplier, so the learning rate decays to alpha * init_value
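A minimal sketch with illustrative values:

```python
import optax

# Cosine decay from 1e-2 down to 1e-4 over 5000 steps.
schedule = optax.cosine_decay_schedule(
    init_value=1e-2,   # eta_max: starting learning rate
    decay_steps=5000,  # T_max: number of annealing steps
    alpha=0.01,        # decay to alpha * init_value = 1e-4
)

print(schedule(0))     # 0.01 at the start
print(schedule(5000))  # 0.0001 at the end
```

For the warm-restart behaviour of SGDR itself, Optax also provides an sgdr_schedule() helper.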
We can even combine two or more schedules using join_schedules(), passing a list of schedules together with the step boundaries at which to switch from one to the next.
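For example, here is a sketch of a constant warm-up phase followed by cosine decay (the boundary and values are illustrative):

```python
import optax

# Hold the learning rate constant for 1000 steps, then anneal it with cosine decay.
warmup = optax.constant_schedule(1e-3)
decay = optax.cosine_decay_schedule(init_value=1e-3, decay_steps=9000)

# boundaries lists the step counts at which the next schedule takes over;
# each later schedule receives the step count offset by its boundary,
# so it starts from its own step zero.
schedule = optax.join_schedules(
    schedules=[warmup, decay],
    boundaries=[1000],
)
```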
It’s common practice to update the learning rate during training, and often we’ll want to update other hyperparameters as well. We can do this easily by using inject_hyperparams(), which wraps an optimizer so that any of its numeric hyperparameters can be given as a schedule, and exposes their current values in the optimizer state.
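Here is a sketch of scheduling Adam’s b1 momentum parameter alongside its learning rate (the schedules and values are illustrative):

```python
import jax.numpy as jnp
import optax

# Any numeric hyperparameter of the wrapped optimizer can be a schedule.
b1_schedule = optax.linear_schedule(
    init_value=0.8, end_value=0.95, transition_steps=1000
)

optimizer = optax.inject_hyperparams(optax.adam)(
    learning_rate=optax.cosine_decay_schedule(1e-3, decay_steps=10_000),
    b1=b1_schedule,
)

params = {"w": jnp.zeros(3)}
opt_state = optimizer.init(params)

# The current hyperparameter values live on the optimizer state, where they
# can be inspected or overwritten between update() calls:
print(opt_state.hyperparams["learning_rate"])
opt_state.hyperparams["b1"] = 0.9
```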