Feedforward Neural Networks
Explore how feedforward neural networks, also known as multi-layer perceptrons, solve non-linear problems such as the XOR classification task. Learn about the network structure, weight learning during training, and the feedforward process to understand how these networks generate outputs that match actual data.
Multi-Layer Perceptrons
We covered perceptrons in the last lesson and saw that they cannot solve the XOR problem. In this chapter, we will look at multi-layer perceptrons and how they solve non-linear problems. Multi-layer perceptrons are also referred to as feedforward networks.
Adding more perceptron layers improves a neural network's performance on complex classification tasks. These complex tasks include problems that are not linearly separable and therefore require a non-linear decision boundary.
The above diagram displays the multi-layer perceptron for solving the XOR problem. Here, we break down the symbols:
- $x_1$ and $x_2$ are the input features, which are fed into the neural network.
- $w$ are the sets of weights, which are learned during the training of the neural network.
- $b$ are the sets of biases, which are fed into each node in the hidden and output layers.
- $\Sigma$ denotes the summation of the products of the input values and weights being fed into a node of the network.
- $f$ denotes the activation function. Here, we are using the step function, which is seen below.
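To make these pieces concrete, here is a minimal sketch of an XOR multi-layer perceptron with a step activation. The specific weights and biases are hand-picked illustrative assumptions (a hidden OR unit and a hidden NAND unit combined by an AND unit), not values taken from the diagram; a trained network would learn its own values.

```python
# Minimal XOR multi-layer perceptron sketch.
# Weights/biases below are assumed illustrative values, not learned ones.

def step(z):
    """Step activation: 1 if the weighted sum is non-negative, else 0."""
    return 1 if z >= 0 else 0

def neuron(inputs, weights, bias):
    """One node: weighted sum plus bias, passed through the step function."""
    return step(sum(x * w for x, w in zip(inputs, weights)) + bias)

def xor_mlp(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)    # hidden unit acting as OR
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # hidden unit acting as NAND
    return neuron([h1, h2], [1, 1], -1.5)  # output unit: AND of h1 and h2

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_mlp(a, b)}")  # prints 0, 1, 1, 0
```

Because XOR is true exactly when OR is true and AND is false, combining these two hidden units lets the network draw the non-linear decision boundary that a single perceptron cannot.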
...