Variational Autoencoder for Deep Learning of Images, Labels and Captions
Yunchen Pu · Zhe Gan · Ricardo Henao · Xin Yuan · Chunyuan Li · Andrew Stevens · Lawrence Carin

Wed Dec 07 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #78

A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features/code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence/absence of associated labels/captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
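The test-time procedure described above, encoding an image once and then averaging predictions over samples from the approximate posterior, can be sketched as follows. This is an illustrative NumPy toy, not the paper's DGDN/CNN/Bayesian-SVM implementation: the linear maps `W_mu`, `W_logvar`, and `W_label`, and all dimensions, are hypothetical stand-ins for the trained encoder and label model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions -- illustrative only, not the paper's sizes.
D_IN, D_LATENT, N_LABELS, N_SAMPLES = 64, 8, 10, 32

# Stand-ins for a trained CNN encoder: linear maps producing the mean and
# log-variance of the approximate posterior q(z | x).
W_mu = rng.normal(0, 0.1, (D_LATENT, D_IN))
W_logvar = rng.normal(0, 0.1, (D_LATENT, D_IN))
# Stand-in for the label model linked to the latent code.
W_label = rng.normal(0, 0.1, (N_LABELS, D_LATENT))

def encode(x):
    """One encoder pass: parameters (mean, log-variance) of q(z | x)."""
    return W_mu @ x, W_logvar @ x

def predict_label_probs(x, n_samples=N_SAMPLES):
    """Average label predictions over samples z ~ q(z | x).

    The encoder runs only once; each Monte Carlo sample reuses its output,
    so only the cheap reparameterized draw and the small label head are
    repeated -- this is why the learned encoder makes test-time averaging
    computationally efficient.
    """
    mu, logvar = encode(x)
    std = np.exp(0.5 * logvar)
    probs = np.zeros(N_LABELS)
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(D_LATENT)  # reparameterization
        logits = W_label @ z
        e = np.exp(logits - logits.max())             # stable softmax
        probs += e / e.sum()
    return probs / n_samples

x = rng.normal(size=D_IN)   # a dummy "image feature" vector
p = predict_label_probs(x)  # averaged class probabilities
```

In the paper the decoder (DGDN) and label/caption models are trained jointly with the encoder; the sketch only covers the prediction-time averaging step.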

Author Information

Yunchen Pu (Duke University)
Zhe Gan (Duke)
Ricardo Henao (Duke University)
Xin Yuan (Bell Labs)
Chunyuan Li (Duke)

Chunyuan is a PhD student at Duke University, affiliated with the Department of Electrical and Computer Engineering and advised by Prof. Lawrence Carin. His recent research interests focus on scalable Bayesian methods for deep learning, including generative models and reinforcement learning, with applications to computer vision and natural language processing.

Andrew Stevens (Duke University)
Lawrence Carin (KAUST)