
Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

Thomas Sutter, Imant Daunhawer, Julia Vogt

Poster Session 6
Thu, Dec 10th, 2020, 17:00–19:00 GMT
Abstract: Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena. However, existing generative models that approximate a multimodal ELBO rely on difficult or inefficient training schemes to learn a joint distribution and the dependencies between modalities. In this work, we propose a novel, efficient objective function that utilizes the Jensen-Shannon divergence for multiple distributions. It simultaneously approximates the unimodal and joint multimodal posteriors directly via a dynamic prior. In addition, we theoretically prove that the new multimodal JS-divergence (mmJSD) objective optimizes an ELBO. In extensive experiments, we demonstrate the advantage of the proposed mmJSD model compared to previous work in unsupervised, generative learning tasks.
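The abstract refers to the Jensen-Shannon divergence for multiple distributions. Purely as a point of reference, and without claiming to reproduce the paper's dynamic-prior formulation, the standard generalization to K distributions with weights π_i compares each distribution to their weighted mixture. A minimal NumPy sketch for categorical distributions (uniform weights assumed):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two categorical distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def generalized_js(dists, weights=None):
    """Generalized Jensen-Shannon divergence for K distributions:
    JS_pi(p_1, ..., p_K) = sum_i pi_i * KL(p_i || M), with mixture M = sum_i pi_i * p_i.
    Note: the paper's mmJSD objective uses a dynamic prior over VAE posteriors;
    this is only the textbook multi-distribution JS divergence for illustration."""
    dists = [np.asarray(d, float) for d in dists]
    if weights is None:
        weights = np.full(len(dists), 1.0 / len(dists))
    mixture = sum(w * d for w, d in zip(weights, dists))
    return sum(w * kl(d, mixture) for w, d in zip(weights, dists))

# Example: three hypothetical "unimodal" distributions over a 4-value support.
p1 = [0.7, 0.1, 0.1, 0.1]
p2 = [0.1, 0.7, 0.1, 0.1]
p3 = [0.25, 0.25, 0.25, 0.25]
print(generalized_js([p1, p2, p3]))
```

The divergence is zero exactly when all K distributions coincide, which is the intuition behind using it to tie unimodal and joint multimodal posteriors together.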
