Practical NLP Models for Estonian Language

Organization
TartuNLP
Abstract
Currently, few pre-trained language models are available for the Estonian language; however, multilingual models often do include Estonian. Such multilingual language models have been shown to perform well on various Estonian NLP tasks (Kittask et al., 2020, Evaluating Multilingual BERT for Estonian). However, they are not optimized for practical single-language applications, because they still contain the vocabulary and embeddings for all the languages they were trained on. This proposal suggests several modifications to multilingual language models to address that. The core modification entails removing from a given model the tokens (and their embeddings) that do not appear in Estonian-language texts.
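The core modification can be sketched as follows. This is a minimal illustration of the idea, not the proposed implementation: it assumes the vocabulary is a token-to-id mapping and the embeddings a NumPy matrix, and it keeps only tokens observed in a target-language corpus (plus special tokens), remapping ids and slicing the embedding matrix accordingly. The function name and signature are hypothetical.

```python
import numpy as np

def prune_vocabulary(vocab, embeddings, corpus_tokens, keep=()):
    """Keep only tokens observed in the target-language corpus
    (plus mandatory special tokens such as [CLS] and [SEP]),
    and slice the embedding matrix to match the reduced vocabulary.

    vocab: dict mapping token -> old id
    embeddings: array of shape (len(vocab), hidden_dim)
    corpus_tokens: iterable of tokens seen in Estonian text
    keep: special tokens that must always survive pruning
    """
    observed = set(corpus_tokens) | set(keep)
    # Keep original id order so relative token positions stay stable.
    kept_ids = sorted(tid for tok, tid in vocab.items() if tok in observed)
    id_map = {old: new for new, old in enumerate(kept_ids)}
    inv = {tid: tok for tok, tid in vocab.items()}
    new_vocab = {inv[old]: new for old, new in id_map.items()}
    new_embeddings = embeddings[kept_ids]  # drop unused embedding rows
    return new_vocab, new_embeddings

# Toy example: a 4-token "multilingual" vocabulary pruned to Estonian.
vocab = {"[CLS]": 0, "tere": 1, "bonjour": 2, "maailm": 3}
embeddings = np.arange(8, dtype=float).reshape(4, 2)
new_vocab, new_emb = prune_vocabulary(
    vocab, embeddings, ["tere", "maailm"], keep=["[CLS]"]
)
# new_vocab -> {"[CLS]": 0, "tere": 1, "maailm": 2}; new_emb has 3 rows.
```

In an actual model the same remapping would also have to be applied to the tokenizer files and the (possibly tied) output layer, which is part of what the thesis would work out.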
Graduation Theses defence year
2022-2023
Supervisor
Aleksei Dorkin, Kairit Sirts
Spoken language(s)
English
Requirements for candidates
Decent knowledge of Python is mandatory. A reasonable level of familiarity with transformer-based language models and the corresponding approaches to tokenization is strongly advised (this is not an opportunity to learn these topics completely from scratch).
Level
Master's
Keywords
#transformers #tokenizers #embeddings #language_model

Contact

Name
Aleksei Dorkin
Phone
E-mail
aleksei.dorkin@ut.ee
See more
https://docs.google.com/document/d/19O8Llco9ZKxpeoZsRjw9GhxccQYsLA_eVOJ1-0WODQw