Masked Multi-Head Attention
Explore the masked multi-head attention mechanism used in transformer decoders for sequence-to-sequence tasks such as language translation. Understand how masking prevents the model from attending to future tokens during training, so that training mirrors the step-by-step generation the decoder performs at test time. Learn how queries, keys, and values are computed, how the mask is applied, and how the attention scores are calculated and combined.
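To make the masking idea concrete before we walk through it in detail, here is a minimal sketch of masked (causal) scaled dot-product attention, assuming PyTorch; the function name, tensor shapes, and mask construction are illustrative assumptions rather than the exact implementation used in the chapter:

```python
# A minimal sketch of masked scaled dot-product attention (single head).
import torch
import torch.nn.functional as F

def masked_attention(query, key, value):
    # query, key, value: (batch, seq_len, d_k)
    d_k = query.size(-1)
    # Attention scores: similarity between every query and every key
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5
    # Causal mask: positions above the diagonal correspond to future tokens
    seq_len = query.size(-2)
    future = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
    scores = scores.masked_fill(future, float('-inf'))
    # Softmax turns the -inf scores into zero attention weights
    weights = F.softmax(scores, dim=-1)
    return weights @ value

# Example: batch of 1, sequence of 3 tokens, dimension 4
q = k = v = torch.randn(1, 3, 4)
out = masked_attention(q, k, v)  # token i attends only to tokens 0..i
```

Because the masked positions receive a score of negative infinity, the softmax assigns them zero weight, so each position can only gather information from itself and the tokens before it.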
In our English-to-French translation task, say our training dataset looks like the one shown here:
A sample training set

| Source sentence | Target sentence |
| --- | --- |
| I am good | Je vais bien |
| Good morning | Bonjour |
| Thank you very much | Merci beaucoup |
By looking at the preceding dataset, we can see that it consists of source and target sentence pairs. We have already seen that, during testing, the decoder predicts the target sentence word by word, one time step at a time.
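To see what predicting "word by word" means in practice at test time, the following is a hedged sketch of greedy, step-by-step decoding; the decoder function, the vocabulary size, and the start/end token IDs are hypothetical placeholders, not part of the book's implementation:

```python
# A sketch of step-by-step (greedy) decoding at test time.
import torch

def greedy_decode(decoder_fn, encoder_output, sos_id, eos_id, max_len=20):
    tokens = [sos_id]
    for _ in range(max_len):
        # The decoder only sees the tokens generated so far
        logits = decoder_fn(tokens, encoder_output)   # (len(tokens), vocab_size)
        next_token = int(logits[-1].argmax())          # most likely next word
        tokens.append(next_token)
        if next_token == eos_id:                       # stop at end-of-sentence
            break
    return tokens

# Toy stand-in decoder that returns random scores over a 10-word vocabulary
fake_decoder = lambda tokens, enc: torch.randn(len(tokens), 10)
print(greedy_decode(fake_decoder, encoder_output=None, sos_id=0, eos_id=1))
```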
During training, since we already have the right target sentence, we can feed the whole target sentence as input to the decoder, but with a small modification. We learned that the decoder takes the <sos> token as its first input and keeps predicting the next word until it generates the <eos> token.
Say we are converting the English sentence 'I am good' to the French sentence 'Je vais bien'. We can just add the <sos> token to the beginning of the target sentence and feed '<sos> Je vais bien' to the decoder as input; the decoder is then expected to predict 'Je vais bien <eos>' as output.
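As a small illustration of how the decoder input and the expected predictions line up during training, here is a sketch of this shifted-input setup (often called teacher forcing); the token strings and the <sos>/<eos> markers are illustrative assumptions rather than a fixed vocabulary from the book:

```python
# Aligning the decoder input with the words it must predict during training.
target = ['Je', 'vais', 'bien']

decoder_input = ['<sos>'] + target   # what the decoder reads
labels = target + ['<eos>']          # what it must predict at each position

for step, expected in enumerate(labels, start=1):
    print(f"step {step}: input so far = {decoder_input[:step]}, predict '{expected}'")
```

At position 1 the decoder sees only <sos> and must predict 'Je'; at position 2 it sees <sos> and 'Je' and must predict 'vais', and so on, which is exactly the behavior the mask enforces.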