Spotlight
Learning Hierarchical Priors in VAEs
Alexej Klushyn · Nutan Chen · Richard Kurle · Botond Cseke · Patrick van der Smagt
We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation that results from a standard normal prior distribution. To incentivise an informative latent representation of the data, we formulate the learning problem as a constrained optimisation problem by extending the Taming VAEs framework to two-level hierarchical models. We introduce a graph-based interpolation method, which shows that the topology of the learned latent representation corresponds to the topology of the data manifold, and we present several examples where desired properties of the latent representation, such as smoothness and simple explanatory factors, are learned by the prior.
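The constrained formulation can be made concrete with a short sketch. Below is a minimal PyTorch illustration, not the authors' implementation: the KL terms are minimised subject to a reconstruction constraint E[C(x)] ≤ κ, handled GECO-style with a Lagrange multiplier as in the Taming VAEs framework, and the standard normal prior on z1 is replaced by a learned two-level hierarchical prior p(z1) = ∫ p(z1|z2) p(z2) dz2. All module names, dimensions, the multiplier update, and the value of κ are illustrative assumptions.

```python
# Sketch of the constrained-optimisation view of a two-level hierarchical VAE:
# minimise KL(q(z|x) || p(z)) subject to E[reconstruction error] <= kappa,
# with the constraint enforced via a Lagrange multiplier (GECO-style update).
import torch
import torch.nn as nn

class TwoLevelVAE(nn.Module):
    def __init__(self, x_dim=784, z1_dim=16, z2_dim=8, h=256):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))   # q(z1|x)
        self.enc2 = nn.Sequential(nn.Linear(z1_dim, h), nn.ReLU(), nn.Linear(h, 2 * z2_dim))  # q(z2|z1)
        self.prior1 = nn.Sequential(nn.Linear(z2_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))  # p(z1|z2)
        self.dec = nn.Sequential(nn.Linear(z1_dim, h), nn.ReLU(), nn.Linear(h, x_dim))        # p(x|z1)

    @staticmethod
    def sample(params):
        # Reparameterised sample from a diagonal Gaussian given (mu, logvar).
        mu, logvar = params.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

def kl_mc(z, q_mu, q_logvar, p_mu, p_logvar):
    # Single-sample Monte Carlo estimate of KL(q || p); Gaussian constants cancel.
    log_q = -0.5 * (q_logvar + (z - q_mu) ** 2 / q_logvar.exp())
    log_p = -0.5 * (p_logvar + (z - p_mu) ** 2 / p_logvar.exp())
    return (log_q - log_p).sum(-1)

model = TwoLevelVAE()
lam = torch.tensor(1.0)   # Lagrange multiplier, updated rather than optimised
kappa = 0.05              # tolerated per-pixel squared error (illustrative value)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.rand(32, 784)   # stand-in batch; replace with real data
z1, q1_mu, q1_lv = model.sample(model.enc1(x))
z2, q2_mu, q2_lv = model.sample(model.enc2(z1))
p1_mu, p1_lv = model.prior1(z2).chunk(2, dim=-1)

recon = ((model.dec(z1) - x) ** 2).mean()           # constraint term C(x)
kl1 = kl_mc(z1, q1_mu, q1_lv, p1_mu, p1_lv).mean()  # KL against learned prior p(z1|z2)
kl2 = kl_mc(z2, q2_mu, q2_lv,                       # KL against standard normal p(z2)
            torch.zeros_like(q2_mu), torch.zeros_like(q2_lv)).mean()

loss = kl1 + kl2 + lam * (recon - kappa)            # Lagrangian of the constrained problem
opt.zero_grad(); loss.backward(); opt.step()
with torch.no_grad():
    # Multiplicative ascent on the constraint violation; GECO additionally
    # smooths the constraint with a moving average, omitted here for brevity.
    lam = (lam * torch.exp(recon.detach() - kappa)).clamp(1e-3, 1e3)
```

Because the objective is the KL terms rather than the full ELBO, the multiplier λ grows while the reconstruction constraint is violated and shrinks once it is satisfied, which is what prevents the over-regularisation the abstract refers to.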
Author Information
Alexej Klushyn (ML Research Lab, Volkswagen Group)
Nutan Chen (Volkswagen Group)
Richard Kurle (Volkswagen Group)
Botond Cseke (Volkswagen Group)
Patrick van der Smagt (Volkswagen Group)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Learning Hierarchical Priors in VAEs
  Wed Dec 11th 01:30 -- 03:30 AM, Room: East Exhibition Hall B + C
More from the Same Authors
- 2020 Poster: Deep Rao-Blackwellised Particle Filters for Time Series Forecasting
  Richard Kurle · Syama Sundar Rangapuram · Emmanuel de Bézenac · Stephan Günnemann · Jan Gasthaus
- 2020 Poster: Normalizing Kalman Filters for Multivariate Time Series Analysis
  Emmanuel de Bézenac · Syama Sundar Rangapuram · Konstantinos Benidis · Michael Bohlke-Schneider · Richard Kurle · Lorenzo Stella · Hilaf Hasson · Patrick Gallinari · Tim Januschowski