
Popular Optimization Algorithms

Explore popular optimization algorithms used in training deep neural networks. Understand the challenges of basic SGD and how improvements like Momentum, Adagrad, RMSProp, and Adam address these issues. This lesson helps you grasp adaptive learning rates, momentum concepts, and practical examples to enhance model training.

Concerns on SGD

This basic version of SGD comes with some limitations and problems that might negatively affect the training.

  1. If the loss function changes quickly in one direction and slowly in another, the gradients can oscillate sharply, making training progress very slow.

  2. If the loss function has a local minimum or a saddle point, it is highly likely that SGD will get stuck there, unable to “jump out” and proceed toward a better minimum.

  3. The gradients are still noisy because we estimate them based only on a small sample of our dataset. The noisy updates might not correlate well with the true direction of the loss function.

  4. Choosing a good learning rate is tricky and requires time-consuming experimentation with different hyperparameters.

  5. The same learning rate is applied to all of our parameters, which can become problematic for features with different frequencies or significance.

To overcome some of these problems, many improvements have been proposed over the years.
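The basic SGD update these concerns refer to can be sketched as below. This is a minimal NumPy illustration on a toy one-dimensional quadratic; the objective and hyperparameters are chosen purely for demonstration, not taken from the lesson:

```python
import numpy as np

def sgd_step(params, grads, lr=0.1):
    """One vanilla SGD update: params <- params - lr * grads."""
    return params - lr * grads

# Toy objective: f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
for _ in range(100):
    grad = 2 * (w - 3.0)
    w = sgd_step(w, grad, lr=0.1)
# w has converged close to the minimizer at 3.0
```

Note that a single global `lr` scales every parameter equally, which is exactly the issue raised in the last point above.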

Adding momentum

One of the basic improvements over SGD comes from adding the notion of momentum. Borrowing the principle of momentum from physics, we push SGD to keep moving in the same direction as in previous timesteps. To accomplish this, we introduce two new variables: velocity and friction.

  • Velocity v is computed as the running mean of gradients up until a point in time and indicates the direction in which the gradient should keep moving.

  • Friction ρ ...
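Putting velocity and friction together, a momentum update can be sketched as below, again on an illustrative toy quadratic. Here `rho` plays the role of the friction coefficient and `v` the velocity; the specific values are assumptions for demonstration:

```python
import numpy as np

# Toy objective: f(w) = (w - 3)^2, with gradient 2 * (w - 3).
w = np.array([0.0])
v = np.zeros_like(w)  # velocity: decaying running sum of past gradients
rho, lr = 0.9, 0.1    # rho: friction coefficient, lr: learning rate

for _ in range(300):
    grad = 2 * (w - 3.0)
    v = rho * v - lr * grad  # decay old velocity, add the new gradient step
    w = w + v                # move along the accumulated velocity
# w ends up close to the minimizer at 3.0
```

Because the velocity keeps part of the previous direction, the updates smooth out the oscillations described earlier and can carry the iterate past shallow local minima and saddle points.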