

Talk in Workshop: Transfer Learning for Natural Language Processing

Cross-lingual Transfer for Named Entity Recognition: A Study on African Languages

David I Adelani


Abstract:

Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resource and low-resource languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. At the same time, in limited labelled-data scenarios, cross-lingual transfer learning with PLMs enables fast adaptation to new languages in both zero- and few-shot settings. In this talk, we will discuss five components of effective cross-lingual transfer for the named entity recognition (NER) task: (1) availability of typologically diverse multilingual benchmark datasets for transfer; (2) development of highly effective and easy-to-adapt multilingual PLMs; (3) construction of effective and parameter-efficient cross-lingual transfer frameworks; (4) use of the same domain for both the source and target languages; and (5) choice of the best source transfer language for adaptation. Our evaluation on MasakhaNER, a benchmark dataset covering 21 African languages, shows that each of these components significantly improves transfer results.
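To make the zero-shot transfer setup concrete, below is a minimal sketch assuming the Hugging Face transformers and datasets libraries: a multilingual PLM (here xlm-roberta-base) is fine-tuned on a source-language NER corpus (English CoNLL-2003) and then evaluated, without any target-language labels, on a MasakhaNER language. The model choice, dataset identifiers, and the Swahili target are illustrative assumptions, not the talk's exact experimental setup.

```python
# Sketch of zero-shot cross-lingual NER transfer with a multilingual PLM.
# Assumptions: xlm-roberta-base as the PLM, English CoNLL-2003 as the source
# language, and Swahili ("swa") from MasakhaNER as the target language.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)

model_name = "xlm-roberta-base"              # assumed multilingual PLM
source = load_dataset("conll2003")           # source-language NER data
target = load_dataset("masakhaner", "swa")   # assumed target: Swahili

labels = source["train"].features["ner_tags"].feature.names
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels))

def tokenize_and_align(batch):
    # Tokenize pre-split words into subwords and propagate word-level NER
    # tags, masking special tokens and subword continuations with -100.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, lab = None, []
        for wid in enc.word_ids(batch_index=i):
            lab.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        enc["labels"].append(lab)
    return enc

src = source.map(tokenize_and_align, batched=True,
                 remove_columns=source["train"].column_names)
# Note: CoNLL's MISC and MasakhaNER's DATE tags occupy the same label
# indices; a real study would reconcile the two label schemes first.
tgt = target.map(tokenize_and_align, batched=True,
                 remove_columns=target["test"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("ner-transfer", per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=src["train"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()                        # fine-tune on the source language only
print(trainer.evaluate(tgt["test"]))   # zero-shot evaluation on the target
```

As written, the final evaluation reports only the loss on the target language; plugging a seqeval-based compute_metrics into the Trainer would yield entity-level F1, the metric typically reported on MasakhaNER. The few-shot variant differs only in continuing fine-tuning on a small labelled target-language sample before evaluating.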
