Restricted Stateless LSTM Network for Baseline Modeling


Learn to build and analyze a restricted stateless LSTM network as your baseline model for time series analysis.

It’s always advisable to begin with a baseline model. Here, a restricted stateless LSTM network serves as the baseline. In such a network, every LSTM layer is stateless, and the final layer has a restricted output, that is:

LSTM(..., stateful=False, return_sequences=False)

We’ll now proceed to build the baseline model, outlining each step in the process.

Input layer

The input layer in LSTM expects three-dimensional inputs. The input shape should be:

(batch size, time-steps, features)

A stateless LSTM doesn’t require the batch size to be specified explicitly. Therefore, the input shape is defined as follows in the code below.
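To make the expected three-dimensional shape concrete, here is a minimal sketch of windowing a toy univariate series with NumPy. The series and the window settings are illustrative only, not from the lesson’s data:

```python
import numpy as np

series = np.arange(8, dtype=float)   # toy univariate series: 0.0 .. 7.0
TIMESTEPS, N_FEATURES = 3, 1         # illustrative window settings

# Slide a window of TIMESTEPS steps over the series to form samples
windows = np.stack([series[i:i + TIMESTEPS]
                    for i in range(len(series) - TIMESTEPS)])
X = windows.reshape(-1, TIMESTEPS, N_FEATURES)

print(X.shape)  # (5, 3, 1) -> (batch size, time-steps, features)
```

Each of the five samples holds three consecutive timesteps of the single feature, which is exactly the layout the input layer expects.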

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input

model = Sequential()
model.add(Input(shape=(TIMESTEPS, N_FEATURES),
                name='input'))

Running the above code creates the input layer.

The input shape can also be provided as an argument to the first LSTM layer defined next. However, it’s defined explicitly here for clarity.
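As a sketch of that alternative, the shape can be passed via the input_shape argument of the first LSTM layer instead of a separate Input layer. The values for TIMESTEPS and N_FEATURES below are illustrative:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

TIMESTEPS, N_FEATURES = 10, 3  # illustrative values

# No separate Input layer: the first LSTM layer receives the shape directly
alt_model = Sequential()
alt_model.add(LSTM(units=16,
                   activation='relu',
                   return_sequences=True,
                   input_shape=(TIMESTEPS, N_FEATURES),
                   name='lstm_layer_1'))

print(alt_model.output_shape)  # (None, 10, 16)
```

Both variants build the same network; the explicit Input layer simply makes the expected shape visible at a glance.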

LSTM layer

As a general principle, two hidden LSTM layers are stacked in the baseline model. The recurrent_activation argument is left at its default, sigmoid, while the output activation is set to relu. The relu activation came into existence after LSTMs, so it doesn’t appear in the legacy LSTM definitions, but it can be used on the output.

model.add(
    LSTM(units=16,
         activation='relu',
         return_sequences=True,
         name='lstm_layer_1'))
model.add(
    LSTM(units=8,
         activation='relu',
         return_sequences=False,
         name='lstm_layer_2'))

When we run the above code, the two LSTM layers are created successfully.
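To sanity-check the stacked layers, the output shape of each can be inspected. The sketch below rebuilds the model end to end with illustrative values for TIMESTEPS and N_FEATURES:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM

TIMESTEPS, N_FEATURES = 10, 3  # illustrative values

model = Sequential()
model.add(Input(shape=(TIMESTEPS, N_FEATURES), name='input'))
model.add(LSTM(units=16, activation='relu',
               return_sequences=True, name='lstm_layer_1'))
model.add(LSTM(units=8, activation='relu',
               return_sequences=False, name='lstm_layer_2'))

# return_sequences=True keeps all timesteps (3-D output);
# the restricted final layer keeps only the last one (2-D output)
model.summary()
print(model.output_shape)  # (None, 8)
```

The summary shows the first layer emitting a three-dimensional tensor and the restricted second layer collapsing it to two dimensions, which is the behavior described next.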

The first LSTM layer has return_sequences set to True. This layer, therefore, yields the hidden outputs for every timestep. Consequently, the first layer output is ...