Update Weights

Explore how to update a neural network's link weights by using the backpropagated error to guide the adjustments. Understand why calculating the weights directly is impractical because of the complex mathematics involved, and why brute-force weight tuning is inefficient. This lesson clarifies the weight update process that is essential for training neural networks effectively.

Error-controlled weight updates

We have not yet discussed the central question of updating the link weights in a neural network. We’ve been working toward this point, and we’re almost there. We have just one more key idea to cover before we unlock this secret.

So far, we’ve propagated the errors back to each layer of the network. Why did we do this? Because the error is used to guide how we adjust the link weights to improve the overall answer given by the neural network. This is basically what we were doing with the linear classifier at the start of this course.
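To make that idea concrete, here is a minimal Python sketch of the kind of error-guided adjustment a simple linear classifier uses. The update rule delta_w = learning_rate * (error / input), the variable names, and all of the numbers are illustrative assumptions, not code taken from this course:

```python
# A hedged sketch of error-guided weight adjustment for a simple
# linear classifier y = w * x. The update rule below is an assumed
# example, not this course's exact code.

learning_rate = 0.5           # moderates how far each adjustment goes
w = 0.25                      # current slope of the classifier line

x, target = 1.0, 0.8          # one training example
output = w * x                # the classifier's current answer
error = target - output      # how far off that answer is

# Nudge the slope a fraction of the way toward removing the error.
w += learning_rate * (error / x)
print(w)                      # 0.525 -- halfway from 0.25 to the ideal 0.8
```

The error tells us both the direction and (scaled by the learning rate) the size of the adjustment, which is exactly the role the backpropagated errors will play for the network's link weights.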

But these nodes aren’t simple linear classifiers. These slightly more sophisticated nodes sum the weighted signals arriving at them and apply the sigmoid threshold function to that sum. So how do we actually update the weights for the links that connect these more sophisticated nodes? ...
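Before answering that, it may help to see what one of these nodes computes. The following is a minimal Python sketch; the function names, signal values, and weights are illustrative assumptions rather than this course's code:

```python
import numpy as np

def sigmoid(x):
    # The sigmoid threshold function: 1 / (1 + e^-x)
    return 1.0 / (1.0 + np.exp(-x))

def node_output(inputs, weights):
    # Sum the weighted signals arriving at the node...
    combined = np.dot(weights, inputs)
    # ...then squash that total through the sigmoid.
    return sigmoid(combined)

# A node with three incoming links (values chosen for illustration).
inputs = np.array([0.9, 0.1, 0.8])
weights = np.array([0.3, -0.2, 0.7])

print(node_output(inputs, weights))   # ~0.69
```

Because the sigmoid sits between the weights and the output, the effect of changing any one weight on the final error is no longer a simple proportion, which is why updating these weights needs the extra idea this lesson builds toward.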