M-BERT Generalization
Understand how M-BERT handles cross-lingual generalization in POS tagging by exploring its performance on languages with different scripts and word orders. Learn how typological and structural similarities between languages influence M-BERT's zero-shot transfer capabilities.
Generalization across scripts
Let's investigate whether the M-BERT model can generalize across languages that are written in different scripts. To understand this, we'll conduct a small experiment with a POS (part-of-speech) tagging task. First, we fine-tune M-BERT for POS tagging using the Urdu language. Then, we evaluate the fine-tuned model on a language it never saw labeled data for, say Hindi. Note that Urdu is written in the Arabic script, while Hindi is written in the Devanagari script.
As we can observe, Urdu and Hindi follow different scripts. Surprisingly, the M-BERT model fine-tuned for the POS tagging task on Urdu still performs well when evaluated on Hindi in this zero-shot setting; Pires et al. (2019) report roughly 91% accuracy for this Urdu-to-Hindi transfer. This tells us that M-BERT's cross-lingual generalization does not depend on a shared script or overlapping vocabulary: the model appears to learn representations that capture syntactic structure across languages.
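One way to see that the two languages share no surface forms is to tokenize the same word in both scripts with the M-BERT tokenizer. A small check, assuming the same transformers setup as above: Devanagari and Perso-Arabic subwords occupy disjoint parts of the WordPiece vocabulary, so the transfer cannot be explained by shared subword tokens.

```python
# Hindi (Devanagari) and Urdu (Perso-Arabic) words map to disjoint
# WordPiece tokens, even for the same underlying word.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

hindi_tokens = tokenizer.tokenize("किताब")  # "book" in Hindi
urdu_tokens = tokenizer.tokenize("کتاب")   # "book" in Urdu

print(hindi_tokens, urdu_tokens)
print(set(hindi_tokens) & set(urdu_tokens))  # empty set: no lexical overlap
```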