Joint Embeddings for Voices and Their Textual Descriptions

Name
Ivan Lastovko
Abstract
Embeddings are vector representations that provide a highly effective way to represent data in machine learning in a more meaningful and efficient manner. In this study, we aim to learn vector representations for speakers' voices and their corresponding textual descriptions while maximizing the cosine similarity between them. In other words, we want to build a system that places a voice and its description as close as possible to each other in a shared multidimensional embedding space. The data collection process in our work involves public datasets as well as manually annotated data. To conduct our research, we employ different training modes: standalone training, where the encoders are trained individually, and joint training, where the encoders are trained together and learn to adapt their outputs to each other. We then evaluate the models on a control sample extracted from the manually collected dataset and assess the quality of our annotations. We also investigate how the cosine similarity between the voice and description representations changes as annotation quality declines.
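The abstract describes a dual-encoder setup in which a voice encoder and a text encoder are trained, either separately or jointly, so that matching voice and description pairs receive a high cosine similarity. The sketch below illustrates what such joint training could look like; the module names, feature dimensions, and loss formulation are illustrative assumptions, not the implementation from the thesis.

```python
# Minimal PyTorch sketch of a dual-encoder, cosine-similarity objective.
# All names and sizes are hypothetical; this is not the thesis code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionEncoder(nn.Module):
    """Maps precomputed features (e.g. speaker or sentence embeddings)
    into a shared embedding space."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so the dot product equals cosine similarity.
        return F.normalize(self.net(x), dim=-1)

voice_encoder = ProjectionEncoder(in_dim=192)  # e.g. speaker-embedding features
text_encoder = ProjectionEncoder(in_dim=384)   # e.g. sentence-embedding features

# Joint training: parameters of both encoders are updated together.
optimizer = torch.optim.Adam(
    list(voice_encoder.parameters()) + list(text_encoder.parameters()), lr=1e-4
)

def training_step(voice_feats: torch.Tensor, text_feats: torch.Tensor) -> float:
    """One update that pushes matching voice/description pairs toward
    cosine similarity 1 (maximizing similarity = minimizing its negative)."""
    v = voice_encoder(voice_feats)
    t = text_encoder(text_feats)
    loss = 1.0 - (v * t).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random features standing in for real paired data.
loss = training_step(torch.randn(8, 192), torch.randn(8, 384))
```

In the standalone mode mentioned in the abstract, only one encoder's parameters would be passed to the optimizer while the other is kept frozen.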
Graduation Thesis language
English
Graduation Thesis type
Master - Computer Science
Supervisor(s)
Mark Fishel
Defence year
2023
PDF