Much like the human brain, a simple neural network consists of interconnected neurons transferring information to each other. Each neuron multiplies its inputs by its weights, sums the results, applies the activation function to that sum, and passes its output on to other neurons. With the help of examples in the training process, a neural network adjusts its weights so that it correctly classifies an unseen input.
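As a minimal sketch of a single neuron, the snippet below (with made-up input and weight values) computes the weighted sum of its inputs and passes it through the sigmoid activation function:

```python
import numpy as np

def activation(x):
    # sigmoid squashes any real number into the range (0, 1)
    return 1 / (1 + np.exp(-x))

# hypothetical neuron with two inputs and two weights
inputs = np.array([0.5, -1.0])
weights = np.array([0.8, 0.2])

# weighted sum of inputs: 0.5*0.8 + (-1.0)*0.2 = 0.2
weighted_sum = np.dot(inputs, weights)

# the neuron's output, passed on to the next layer
output = activation(weighted_sum)
print(output)
```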
A neural network consists of three main layers:
input layer: the initial layer of the network which takes in an input.
hidden layer(s): the middle optional layer(s) needed for complex tasks.
output layer: the final layer of the network which gives the output.
The following are some important functions that will be used in the implementation:
activation function: $1/(1+e^{-x})$
error function: $(target - output)^2 /2$
derivative of the error function: $-(target - output)$
partial derivative of the activation function: $output * (1-output)$
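These four functions can be written directly in Python; the sample values below (a target of 1.0 and an input of 0.0) are chosen only to show what each function returns:

```python
import numpy as np

def activation(x):
    # sigmoid: 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-x))

def error(target, output):
    # squared error, halved so its derivative is simpler
    return (target - output) ** 2 / 2

def error_derivative(target, output):
    # derivative of the error with respect to the output
    return -(target - output)

def activation_derivative(output):
    # derivative of the sigmoid, written in terms of its own output
    return output * (1 - output)

out = activation(0.0)                 # 0.5
print(error(1.0, out))                # 0.125
print(error_derivative(1.0, out))     # -0.5
print(activation_derivative(out))     # 0.25
```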
Our simple neural network will look something like this:
# importing dependencies
import numpy as np

# The activation function
def activation(x):
    return 1 / (1 + np.exp(-x))

# A 2 x 1 matrix of randomly generated weights in the range -1 to 1
weights = np.random.uniform(-1, 1, size=(2, 1))

# The training set divided into input and output. Notice that
# we are trying to train our neural network to predict the output
# of the logical OR.
training_inputs = np.array([[0, 0, 1, 1, 0, 1]]).reshape(3, 2)
training_outputs = np.array([[0, 1, 1]]).reshape(3, 1)

for i in range(15000):
    # forward pass
    dot_product = np.dot(training_inputs, weights)
    output = activation(dot_product)

    # backward pass
    temp2 = -(training_outputs - output) * output * (1 - output)
    adj = np.dot(training_inputs.transpose(), temp2)

    # 0.5 is the learning rate
    weights = weights - 0.5 * adj

# The testing set
test_input = np.array([1, 0])
test_output = activation(np.dot(test_input, weights))

# OR of 1, 0 is 1
print(test_output)
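To see what one pass of the training loop is doing, the sketch below performs a single weight update by hand on one training example, using hypothetical starting weights. The error after the update is smaller than before, which is exactly what repeating the update 15000 times relies on:

```python
import numpy as np

def activation(x):
    return 1 / (1 + np.exp(-x))

# hypothetical starting point: one OR training example
x = np.array([1.0, 0.0])    # input
target = 1.0                # OR(1, 0) = 1
w = np.array([0.1, -0.2])   # weights before the update (made up)
lr = 0.5                    # learning rate

# forward pass
output = activation(np.dot(x, w))

# backward pass: the chain rule combines the error derivative
# -(target - output) with the activation derivative output*(1-output)
delta = -(target - output) * output * (1 - output)
w_new = w - lr * delta * x

# the error shrinks after the update
err_before = (target - output) ** 2 / 2
err_after = (target - activation(np.dot(x, w_new))) ** 2 / 2
print(err_before, err_after)
```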