Exotic latent spaces for sequence autoencoders

Organization
Wise Plc
Abstract
Variational Autoencoders (VAEs) are a staple of neural network architectures, useful for distilling the essential characteristics of the input data into a fixed-dimension 'embedding' in the latent space. The 'variational' part refers to the variational approximation of the posterior distribution over the latent space when the model is interpreted in a Bayesian fashion. This thesis will look at autoencoders of variable-length sequences (such as sequences of transactions at an online service) and compare the performance of the variational approximation to at least one of two other latent space approaches: first, an Information Bottleneck constraint, which does away with the crude Gaussian prior assumption of the VAE; and second, a hyperbolic latent space geometry, which has been claimed to better represent the degree of uncertainty in the model's predictions. The starting point will be the sequence VAE implemented in https://github.com/transferwise/neural-lifetimes (which includes a draft implementation of the Information Bottleneck). A reasonably high level of mathematical knowledge is a prerequisite.
References: https://arxiv.org/pdf/2101.01600.pdf, https://arxiv.org/abs/2005.01123
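
To make the three candidate latent-space regularizers concrete, the sketch below contrasts them in PyTorch (the referenced neural-lifetimes repo is PyTorch-based). This is illustrative only, not the thesis implementation: the names (SeqEncoder, MixturePrior, exp_map_origin, poincare_dist_to_origin) and all hyperparameters are assumptions of this write-up, and the learned mixture-of-Gaussians prior is just one possible IB-style relaxation of the fixed Gaussian prior, not necessarily the repo's draft.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class SeqEncoder(nn.Module):
    """GRU encoder: variable-length sequences -> fixed-size latent statistics."""

    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, padded, lengths):
        packed = nn.utils.rnn.pack_padded_sequence(
            padded, lengths.cpu(), batch_first=True, enforce_sorted=False
        )
        _, h = self.rnn(packed)          # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)


def gaussian_kl(mu, logvar):
    """Standard VAE term: KL(q(z|x) || N(0, I)), summed over latent dims."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(-1)


def gaussian_log_prob(z, mu, logvar):
    """log q(z|x) for a diagonal Gaussian; needed for the IB-style estimate."""
    return -0.5 * (
        (z - mu).pow(2) / logvar.exp() + logvar + math.log(2 * math.pi)
    ).sum(-1)


class MixturePrior(nn.Module):
    """One possible IB-style relaxation (an assumption, not the repo's draft):
    a learned mixture-of-Gaussians prior replacing the fixed N(0, I), with the
    rate term estimated by Monte Carlo as log q(z|x) - log p(z)."""

    def __init__(self, latent_dim, n_components=8):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_components, latent_dim))
        self.logvar = nn.Parameter(torch.zeros(n_components, latent_dim))
        self.logits = nn.Parameter(torch.zeros(n_components))

    def log_prob(self, z):
        comp = -0.5 * (
            (z.unsqueeze(1) - self.mu).pow(2) / self.logvar.exp()
            + self.logvar + math.log(2 * math.pi)
        ).sum(-1)                                    # (batch, n_components)
        return torch.logsumexp(F.log_softmax(self.logits, 0) + comp, dim=-1)


def exp_map_origin(v, eps=1e-6):
    """Exponential map at the origin of the Poincare ball (curvature -1):
    exp_0(v) = tanh(||v||) * v / ||v||, taking Euclidean latents into the ball."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm


def poincare_dist_to_origin(z, eps=1e-6):
    """d(0, z) = 2 * artanh(||z||); distance from the origin is one proxy for
    how 'confident' a hyperbolic embedding is."""
    return 2.0 * torch.atanh(z.norm(dim=-1).clamp(max=1.0 - eps))


if __name__ == "__main__":
    enc = SeqEncoder(input_dim=4, hidden_dim=32, latent_dim=8)
    prior = MixturePrior(latent_dim=8)
    x = torch.randn(3, 10, 4)                        # padded batch of sequences
    lengths = torch.tensor([10, 7, 5])
    mu, logvar = enc(x, lengths)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization

    kl_vae = gaussian_kl(mu, logvar)                             # (a) Gaussian VAE
    kl_ib = gaussian_log_prob(z, mu, logvar) - prior.log_prob(z) # (b) IB-style
    z_hyp = exp_map_origin(mu)                                   # (c) hyperbolic
    print(kl_vae.mean(), kl_ib.mean(), poincare_dist_to_origin(z_hyp).mean())

In the thesis, each of the three rate terms would plug into the same sequence reconstruction loss; the hyperbolic variant would additionally need a decoder that consumes points on the ball (e.g. via the logarithmic map) rather than raw Euclidean vectors.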
Graduation thesis defence year
2022-2023
Supervisor
Egor Kraev (Wise), Prof. Meelis Kull
Spoken language(s)
English
Requirements for candidates
Level
Masters
Keywords

Contact for application

Name
Egor Kraev
Phone
E-mail
egor.kraev@wise.com