
Set Up a Learning Rate in the Training Classifier

Understand how to apply a learning rate to control updates during neural network training. This lesson helps you moderate changes to the model, preventing overfitting to single examples and smoothing out errors from noisy data, resulting in more stable and effective training outcomes.


Fixing the problem

Unfortunately, we can’t completely rely on the very last training example. This is an important idea in machine learning. To improve the results, we moderate the updates. This means that instead of jumping enthusiastically to each new A, we take a fraction of the change ΔA. That way, we move in the direction that the training example suggests, but we do so cautiously and keep some of the previous value, which we potentially arrived at through many training iterations. We saw this idea of moderating our refinements before with the simpler miles to kilometers predictor, ...
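The moderated update can be sketched in a few lines of Python. This is an illustrative toy, not the lesson's actual code: it assumes the simple miles-to-kilometers model km = A * miles, and the names `A`, `delta_A`, and `learning_rate` are chosen for this sketch. Instead of applying the full correction ΔA that would make the model fit the latest example exactly, we apply only a fraction of it:

```python
def moderated_update(A, x, target, learning_rate):
    """Nudge parameter A toward the value suggested by one training
    example (x, target), scaled down by the learning rate."""
    error = target - A * x
    delta_A = error / x                 # full correction for this one example
    return A + learning_rate * delta_A  # take only a fraction of it

# Illustrative training loop: A should creep toward the true
# miles-to-kilometers factor (about 1.609) without overshooting.
A = 0.5
examples = [(100.0, 160.9), (50.0, 80.45), (200.0, 321.8)]
for miles, km in examples:
    A = moderated_update(A, miles, km, learning_rate=0.5)
```

With `learning_rate=1.0` each update would jump all the way to the value the latest example suggests, discarding everything learned so far; a smaller value keeps most of the accumulated estimate and averages out noise across examples.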