Introduction to Recurrent Neural Networks (RNNs)

Learn the basics of recurrent neural networks and their variations.

The approach

One common difficulty with the approaches discussed in the previous lesson is that they often require significant manual content creation using expert knowledge. For instance, Markov logic networks (MLNs) require expert knowledge of the game design to define a complex set of predicates and rules, and then to construct a network structure that properly forms relationships between them. Other approaches described in the previous chapter require manual construction of various artifacts, including plan libraries, network structures, probability tables, and so on. Therefore, some researchers are looking to obviate some of this manual work by using machine learning to automatically extract information from existing game data. RNNs are one tool used for this purpose.

RNNs are neural networks designed for processing sequences of variables. In particular, they can handle variable-length sequences, which would be impractical for ordinary feedforward networks because those expect inputs of a fixed size.
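To make the variable-length point concrete, here is a toy sketch (the scalar state and the weights 0.5 and 1.0 are illustrative assumptions, not from the lesson): because the same update rule is applied at every timestep, one function handles sequences of any length.

```python
import numpy as np

# Toy scalar "RNN" step applied in a loop. The constants are arbitrary
# illustrations; the point is that the same update handles any length.
def run(sequence, h0=0.0):
    h = h0
    for x_t in sequence:
        h = np.tanh(0.5 * h + x_t)  # state h carries history forward
    return h

short_result = run([0.1, 0.2])
long_result = run([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
# Both calls use the same function with the same parameters; a feedforward
# network with a fixed input size could not accept both without padding
# or truncation.
```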

What are RNNs?

Recurrent neural networks (RNNs) are neural networks with loops in them that allow the retention of historical information (see the figure below). In this figure, x is the input sequence of vectors, represented as ⟨…, x_{t−1}, x_t, x_{t+1}, …⟩, and y is the output sequence, represented as ⟨…, y_{t−1}, y_t, y_{t+1}, …⟩. Note that each value x_i and y_i can itself be a vector. The primary difference between RNNs and ordinary NNs or CNNs lies in the links between the neurons in the hidden layer. Each neuron h_t in the hidden layer takes input from two sources: x_t and h_{t−1}. These links from the previous timestep make each timestep dependent on the data seen earlier in the sequence.

Representation of the network

Each rectangular box (in blue) in the network shown in the figure below is a hidden layer at timestep t, and each holds a number of neurons. The output of a neuron h_t is a function of the current input x_t and the output h_{t−1} from the previous timestep. Here, U, V, and W represent weight matrices that are learned by training the neural network: U connects the input to the hidden layer, W connects the hidden layer to itself across timesteps, and V connects the hidden layer to the output.
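A minimal NumPy sketch of this forward pass, assuming a tanh hidden activation and illustrative dimensions (the lesson specifies neither; the random matrices stand in for weights that would normally be learned):

```python
import numpy as np

rng = np.random.default_rng(42)
input_size, hidden_size, output_size = 3, 5, 2  # illustrative sizes

# U: input-to-hidden, W: hidden-to-hidden, V: hidden-to-output.
# In practice these are learned by training; here they are random.
U = rng.normal(scale=0.1, size=(hidden_size, input_size))
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
V = rng.normal(scale=0.1, size=(output_size, hidden_size))

def rnn_forward(xs):
    """Unroll the RNN over a sequence xs of shape (T, input_size)."""
    h = np.zeros(hidden_size)          # initial hidden state
    ys = []
    for x_t in xs:
        h = np.tanh(U @ x_t + W @ h)   # h_t depends on x_t and h_{t-1}
        ys.append(V @ h)               # y_t is read out from h_t
    return np.array(ys)

xs = rng.normal(size=(4, input_size))
ys = rnn_forward(xs)  # one output vector per timestep: shape (4, output_size)
```

Note that the same U, W, and V are reused at every timestep; this weight sharing is what lets the unrolled network process sequences of any length with a fixed number of parameters.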
