Aligning contextual vector spaces between different neural systems

Organization
TartuNLP
Abstract
The idea is to see whether the contextual token embeddings in the encoders of independently created machine translation models (e.g. Google Translate and Neurotõlge) have a similar topology, i.e. whether vectors can be converted between the two systems. Motivation: if they can, then one could (1) "read" the input text with one MT system's encoder, (2) convert that encoder's vectors into the vector space of the other system, and (3) generate the output translation with the other MT system's decoder. If the vectors cannot be converted, the result is still a good exploratory master's thesis :-) More broadly, the same exploration can be done for other encoders (BERT, GPT-2, etc.).
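
To make the alignment question concrete: a common first attempt is to collect per-token contextual vectors from both encoders over the same tokenized input and fit a linear map between the two spaces, e.g. orthogonal Procrustes. Below is a minimal PyTorch sketch of that step; the tensor sizes are hypothetical and X and Y are random placeholders standing in for paired encoder outputs from the two systems.

    import torch

    # Placeholder data: in a real experiment, X and Y would hold the
    # contextual vectors that encoders A and B produce for the *same*
    # tokens of the same (identically tokenized) input corpus.
    n_tokens, dim = 10_000, 512          # hypothetical sizes
    X = torch.randn(n_tokens, dim)       # encoder A's token vectors
    Y = torch.randn(n_tokens, dim)       # encoder B's token vectors

    def procrustes_align(X, Y):
        # Orthogonal Procrustes: the rotation W minimizing ||X @ W - Y||_F
        # is W = U @ Vt, where U, S, Vt = svd(X^T @ Y).
        U, _, Vt = torch.linalg.svd(X.T @ Y)
        return U @ Vt

    W = procrustes_align(X, Y)
    mapped = X @ W

    # Relative alignment error (Frobenius norm); on real paired embeddings,
    # a value well below 1.0 would suggest the two spaces share structure.
    err = torch.linalg.norm(mapped - Y) / torch.linalg.norm(Y)
    print(f"relative alignment error: {err:.3f}")

Restricting the map to a rotation keeps distances intact, which makes a negative result more informative; an unconstrained least-squares map (torch.linalg.lstsq) is the natural fallback when the two encoders have different dimensionality.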
Graduation thesis defence year
2022-2023
Supervisor
Mark Fishel
Spoken language(s)
Estonian, English
Requirements for candidates
The skills needed for this are basic Linux, Python, some PyTorch, and knowledge of transformer/encoder-decoder machine translation (or you can learn these as part of the thesis).
Level
Master's
Keywords
#transformer #transformers #embeddings #alignment

Contact

Name
Mark Fishel
E-mail
fishel@ut.ee