Batch Normalization and Dropout
Explore how batch normalization normalizes feature channels to accelerate training and stabilize gradients, and how dropout randomly drops units to prevent overfitting. Learn their advantages, disadvantages, and implementation within convolutional neural networks to enhance model robustness.
Batch normalization
If you open any introductory machine learning textbook, you will find the idea of input scaling: training a model with gradient descent on non-normalized input features is undesirable.
Let’s start with an intuitive example to understand why we want normalization inside any model.
Suppose you have one input feature in the range [0, 10000] and another in the range [0, 1]. Since our weights are initialized in a very tiny range like [-1, 1], any linear combination of the two would be dominated by the large-scale feature and would effectively ignore the one in [0, 1].
We encounter the same issue inside the layers of deep neural networks. In this lesson, we will carry this idea into the intermediate layers of the network.
If we think out of the box, any intermediate layer is conceptually the same as the input layer; it accepts features and transforms them.
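To make this concrete, here is a minimal sketch (assuming PyTorch, which the lesson text does not require; all variable names are illustrative) of how a feature on a much larger scale drowns out the other one, and how standardizing each feature restores the balance.

```python
import torch

# Two input features on very different scales: one in [0, 10000], one in [0, 1].
x = torch.stack([torch.rand(8) * 10000, torch.rand(8)], dim=1)  # shape [8, 2]

# Small random weights, roughly in [-1, 1], as in a freshly initialized layer.
w = torch.empty(2).uniform_(-1, 1)
print((x * w).sum(dim=1))       # output is dominated by the large-scale feature

# Standardize each feature to zero mean and unit standard deviation.
x_norm = (x - x.mean(dim=0)) / x.std(dim=0)
print((x_norm * w).sum(dim=1))  # now both features contribute comparably
```

After standardization, both columns live on the same scale, so the gradient with respect to each weight is of a comparable magnitude, which is exactly what we want for gradient descent.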
Notations
Throughout this lesson, N will be the batch size, H will refer to the height, W to the width, and C to the feature channels. The Greek letter μ() refers to the mean and the Greek letter σ() refers to the standard deviation.
The batch features are denoted by x, with a shape of [N, C, H, W].
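The following sketch (again PyTorch, as an assumption; the variable names are mine) shows the per-channel statistics implied by this notation: μ and σ are computed over the N, H, and W dimensions, leaving one value per channel C.

```python
import torch

N, C, H, W = 4, 3, 8, 8               # batch size, channels, height, width
x = torch.randn(N, C, H, W)           # a batch of feature maps

# One mean and one variance per channel, aggregated over N, H, and W.
mu = x.mean(dim=(0, 2, 3), keepdim=True)                  # shape [1, C, 1, 1]
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)  # shape [1, C, 1, 1]

# Normalize every activation with its channel's statistics.
x_hat = (x - mu) / torch.sqrt(var + 1e-5)

# The built-in layer does the same, plus a learnable scale/shift
# and running statistics for inference.
bn = torch.nn.BatchNorm2d(C)
out = bn(x)
```

Because μ and σ are shared across the N, H, and W dimensions, each channel is treated as a single feature, exactly as in the input-scaling example above.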
...