Generating Photorealistic Images with β-Variational Autoencoder

Organization
Geometric Deep Learning
Abstract
Deep neural networks are very successful at automatically extracting meaningful features from data. Manual feature engineering is often not required, since features are learned through end-to-end training; the focus instead shifts to designing the architecture of the deep neural network. However, because of the complexity of deep neural networks, the extracted features are highly complex and not interpretable by humans. The deep learning model is treated as a black box, and the emphasis is put on external evaluation metrics such as training and test error. For critical applications (autonomous driving, cybersecurity), it would be highly beneficial to understand what kinds of hidden (latent) representations the model has actually learned. Several methods exist in the literature for learning meaningful hidden representations. In this talk I will mainly look at the Variational Autoencoder (VAE), a deep generative model. VAEs with an adjustable hyperparameter (β-VAEs) have been shown to disentangle simple data-generating factors from a highly complex input space. For example, when trained on images of faces, a VAE is able to learn to encode the direction of the lighting in a single hidden variable.
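
As a rough illustration of the idea (a hypothetical sketch, not part of this listing and not the thesis code): a β-VAE is the standard VAE objective with the KL term re-weighted by a coefficient β > 1, which is what encourages disentangled latent variables. The encoder/decoder architecture, layer sizes, and the choice of a Bernoulli decoder on flattened images below are assumptions made only for illustration.

```python
# Minimal beta-VAE sketch in PyTorch, assuming a Gaussian encoder q(z|x)
# and a Bernoulli decoder p(x|z) on flattened images in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, input_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    def loss(self, x):
        x_logits, mu, logvar = self(x)
        # Reconstruction term: -E_q[log p(x|z)]
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        # KL(q(z|x) || N(0, I)), closed form for diagonal Gaussians
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # beta > 1 puts extra weight on the KL term, pressuring the latent
        # dimensions toward independent, disentangled factors
        return recon + self.beta * kl
```

Setting beta = 1 recovers the ordinary VAE; increasing it trades some reconstruction quality for more disentangled latent variables.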
Graduation Theses defence year
2022-2023
Supervisor
Kallol Roy
Spoken language(s)
English
Requirements for candidates
Level
Bachelor, Masters
Keywords
#β-vae, Variational Autoencoder

Contact for applications

Name
KALLOL ROY
Phone
+37256051480
E-mail
kallol.roy@ut.ee
See more
https://sites.google.com/view/disentanglenips2017