Weights: The Heart of the Network
Learn how to define link weights in a neural network.
Weights
Let’s create the network of nodes and links. The most important part of the network is the link weights. They’re used to calculate the signal being fed forward and the error as it’s propagated backward, and the link weights themselves are refined during training in an attempt to improve the network.
The weights can be concisely expressed as a matrix. So, we can create:

A matrix for the weights for links between the input and hidden layers: $W_\text{input\_hidden}$, of size $(\text{hidden\_nodes} \times \text{input\_nodes})$.

A matrix for the links between the hidden and output layers: $W_\text{hidden\_output}$, of size $(\text{output\_nodes} \times\text{hidden\_nodes})$.
Remember the convention from earlier: a weight matrix is sized (destination nodes $\times$ source nodes) so that multiplying it by the vector of incoming signals propagates them forward. That’s why the first matrix is $(\text{hidden\_nodes} \times \text{input\_nodes})$, and not the other way around, $(\text{input\_nodes} \times \text{hidden\_nodes})$.
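The shape convention can be checked with a small sketch. The layer sizes below are made-up examples, not values from the text; the point is that the $(\text{hidden\_nodes} \times \text{input\_nodes})$ ordering lets a plain matrix-vector product carry the signal forward:

```python
import numpy as np

# Hypothetical layer sizes, chosen only for illustration.
input_nodes, hidden_nodes, output_nodes = 3, 5, 2

# Weight matrices sized (destination nodes x source nodes).
w_input_hidden = np.random.rand(hidden_nodes, input_nodes)
w_hidden_output = np.random.rand(output_nodes, hidden_nodes)

# Input signals as a column vector of shape (input_nodes, 1).
inputs = np.random.rand(input_nodes, 1)

# (hidden_nodes x input_nodes) . (input_nodes x 1) -> (hidden_nodes x 1)
hidden_inputs = np.dot(w_input_hidden, inputs)

# (output_nodes x hidden_nodes) . (hidden_nodes x 1) -> (output_nodes x 1)
final_inputs = np.dot(w_hidden_output, hidden_inputs)

print(hidden_inputs.shape)  # (5, 1)
print(final_inputs.shape)   # (2, 1)
```

With the transposed ordering, `np.dot` would raise a shape-mismatch error, which is exactly why the convention matters.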
Remember that the initial values of the link weights should be small and random. The following numpy function generates an array of values selected randomly between $0$ and $1$, where the size is $(\text{rows} \times \text{columns})$.
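As a sketch of what such initialization looks like, the snippet below uses `numpy.random.rand`, which draws uniformly from $[0, 1)$ in the $(\text{rows} \times \text{columns})$ shape described above. The layer sizes are assumed for illustration, and subtracting $0.5$ to centre the weights on zero is a common refinement, not something stated in the text:

```python
import numpy as np

hidden_nodes, input_nodes = 5, 3  # example sizes, not from the text

# Uniform random values in [0, 1), shaped (rows x columns).
w = np.random.rand(hidden_nodes, input_nodes)
print(w.shape)  # (5, 3)

# A common refinement (an assumption here): shift the range to
# [-0.5, 0.5) so the initial weights are small and centred on zero.
w_centred = np.random.rand(hidden_nodes, input_nodes) - 0.5
```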