Decoder Object

Understand how to transform the encoder's final LSTM states into an AttentionWrapperState suitable for decoding. Learn to build a BasicDecoder from a decoder cell and a sampler, apply a projection layer to produce vocabulary logits, and implement these steps in TensorFlow Seq2Seq models.

Chapter Goals:

  • Convert the encoder’s final state into the proper format for decoding with attention
  • Create a BasicDecoder object to use for decoding

A. Creating the initial state

The final state from the encoder is a tuple containing an LSTMStateTuple object for each layer of the BiLSTM. However, if we want to use this as the initial state for an attention-wrapped decoder, we need to convert it into an AttentionWrapperState.

For the conversion we need to call get_initial_state, whose required arguments specify the batch size and data type of the state, and then copy the encoder's final state into the resulting AttentionWrapperState.
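
Below is a minimal sketch of this conversion, assuming the TensorFlow Addons seq2seq API (tfa.seq2seq), which exposes both get_initial_state and clone. The encoder outputs, encoder final state, and decoder cell here are hypothetical stand-ins for the objects built in earlier steps, and LuongAttention is just one possible attention mechanism:

```python
import tensorflow as tf
import tensorflow_addons as tfa

batch_size, src_len, num_units = 32, 10, 128  # hypothetical sizes

# Stand-ins for the encoder's results (random tensors for this sketch).
enc_outputs = tf.random.normal([batch_size, src_len, num_units])
enc_final_state = [tf.zeros([batch_size, num_units]),   # hidden state h
                   tf.zeros([batch_size, num_units])]   # cell state c

# Attention-wrapped decoder cell attending over the encoder outputs.
dec_cell = tf.keras.layers.LSTMCell(num_units)
attention = tfa.seq2seq.LuongAttention(num_units, memory=enc_outputs)
attn_cell = tfa.seq2seq.AttentionWrapper(dec_cell, attention)

# get_initial_state builds a zeroed AttentionWrapperState; batch_size
# and dtype are its required arguments.
zero_state = attn_cell.get_initial_state(batch_size=batch_size,
                                         dtype=tf.float32)

# clone() swaps in the encoder's final state while keeping the
# attention bookkeeping fields (alignments, time, ...) zero-initialized.
initial_state = zero_state.clone(cell_state=enc_final_state)
```

The resulting initial_state can then be passed to the decoder object (e.g. a BasicDecoder) as its starting state.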