
Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis
Jimei Yang · Scott E Reed · Ming-Hsuan Yang · Honglak Lee

Mon Dec 07 04:00 PM -- 08:59 PM (PST) @ 210 C #7

An important problem for both graphics and vision is to synthesize novel views of a 3D object from a single image. This is particularly challenging due to the partial observability inherent in projecting a 3D object onto the image space, and the ill-posedness of inferring object shape and pose. However, we can train a neural network to address the problem if we restrict our attention to specific object classes (in our case faces and chairs) for which we can gather ample training data. In this paper, we propose a novel recurrent convolutional encoder-decoder network that is trained end-to-end on the task of rendering rotated objects starting from a single image. The recurrent structure allows our model to capture long-term dependencies along a sequence of transformations. We demonstrate the quality of its predictions for human faces on the Multi-PIE dataset and for a dataset of 3D chair models, and also show its ability to disentangle latent data factors without using object class labels.
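The idea of a recurrent encoder-decoder that repeatedly applies a transformation in latent space can be illustrated with a minimal sketch. The sizes, weight matrices, and function names below are illustrative assumptions, not the paper's actual architecture (which uses convolutional layers and trained parameters); the sketch only shows the control flow: encode once, recurrently transform the pose part of the code while keeping the identity part fixed, and decode one view per step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a flattened 16x16 image and a
# latent code split into identity and pose parts.
IMG, D_ID, D_POSE = 16 * 16, 32, 8

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.standard_normal((IMG, D_ID + D_POSE)) * 0.01
W_rot = rng.standard_normal((D_POSE, D_POSE)) * 0.1   # per-step "rotate" action
W_dec = rng.standard_normal((D_ID + D_POSE, IMG)) * 0.01

def encode(x):
    """Map an image to identity and pose codes (the disentangled factors)."""
    z = np.tanh(x @ W_enc)
    return z[:D_ID], z[D_ID:]

def decode(z_id, z_pose):
    """Render an image from the concatenated latent code."""
    return np.tanh(np.concatenate([z_id, z_pose]) @ W_dec)

def synthesize_views(x, n_steps):
    """Recurrently apply the rotation action to the pose code only,
    keeping identity fixed, and decode one view per step."""
    z_id, z_pose = encode(x)
    views = []
    for _ in range(n_steps):
        z_pose = np.tanh(z_pose @ W_rot)   # recurrent transformation
        views.append(decode(z_id, z_pose))
    return np.stack(views)

x = rng.standard_normal(IMG)
views = synthesize_views(x, n_steps=4)
print(views.shape)
```

Because the identity code is reused across all steps while only the pose code evolves, long rotation sequences stay consistent with the input object, which is the role the recurrent structure plays in the model described above.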

Author Information

Jimei Yang (UC Merced)
Scott E Reed (University of Michigan)
Ming-Hsuan Yang (UC Merced)
Honglak Lee (U. Michigan)