Weights: The Heart of the Network

Learn how to define link weights in a neural network.

Weights

Let’s create the network of nodes and links. The most important part of the network is the link weights: they’re used to calculate the signal fed forward and the error propagated backward, and it’s the weights themselves that are refined in an attempt to improve the network.

The weights can be concisely expressed as a matrix. So, we can create:

  • A matrix for the weights of the links between the input and hidden layers, $W_{\text{input\_hidden}}$, of size $(\text{hidden\_nodes} \times \text{input\_nodes})$.

  • A matrix for the links between the hidden and output layers, $W_{\text{hidden\_output}}$, of size $(\text{output\_nodes} \times \text{hidden\_nodes})$.

Recall the convention from earlier to see why the first matrix is set up as $(\text{hidden\_nodes} \times \text{input\_nodes})$, and not the other way around, $(\text{input\_nodes} \times \text{hidden\_nodes})$.
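As a quick dimension check, assuming (as in the earlier feedforward formulas) that the inputs form a column vector $I$ of size $(\text{input\_nodes} \times 1)$:

$$
X_{\text{hidden}} = W_{\text{input\_hidden}} \cdot I
$$

A $(\text{hidden\_nodes} \times \text{input\_nodes})$ matrix times an $(\text{input\_nodes} \times 1)$ vector gives a $(\text{hidden\_nodes} \times 1)$ vector: one combined signal per hidden node. With the shapes the other way around, the product wouldn't conform.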

Remember that the initial values of the link weights should be small and random. The following numpy function generates an array of values selected randomly between $0$ and $1$, where the size is $(\text{rows} \times \text{columns})$.
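A minimal sketch of that idea using numpy.random.rand, which returns an array of the requested shape filled with values drawn uniformly from $[0, 1)$; the node counts here are hypothetical placeholders:

```python
import numpy

# hypothetical layer sizes, just for illustration
input_nodes = 3
hidden_nodes = 3
output_nodes = 3

# numpy.random.rand(rows, columns) gives random values in [0, 1)
# rows index the destination layer's nodes, columns the source layer's
wih = numpy.random.rand(hidden_nodes, input_nodes)    # W_input_hidden
who = numpy.random.rand(output_nodes, hidden_nodes)   # W_hidden_output

print(wih.shape)   # (3, 3)
print(who.shape)   # (3, 3)
```

A common refinement, not required by the description above, is to subtract $0.5$ from each value so the weights range over $-0.5$ to $+0.5$ and can be negative as well as positive.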
