Oral
Learning Generative Models with Visual Attention
Charlie Tang · Nitish Srivastava · Russ Salakhutdinov

Tue Dec 09 01:40 PM -- 02:00 PM (PST) @ Level 2, room 210

Attention has long been proposed by psychologists as important for efficiently dealing with the massive amounts of sensory stimuli reaching the neocortex. Inspired by attention models in visual neuroscience and by the need for object-centered data in generative models, we propose a deep-learning-based generative framework that uses attention. The attentional mechanism propagates signals from a region of interest in a scene to an aligned canonical representation for generative modeling. By ignoring background clutter, the generative model can concentrate its resources on the object of interest. A convolutional neural net provides good initializations for posterior inference, which is performed with Hamiltonian Monte Carlo. After training on images of faces, our model robustly attends to the face region of novel test subjects. More importantly, it can learn generative models of new faces from a novel dataset of large images in which the face locations are not known.
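The core attentional step the abstract describes, warping a region of interest into a small, aligned canonical window that the generative model sees, can be sketched as a differentiable-style bilinear crop. The function name, parameters, and the square, axis-aligned window are our illustrative assumptions; the paper's actual mechanism may use a richer transform (e.g. handling rotation):

```python
import numpy as np

def extract_canonical_window(image, cx, cy, scale, out_size=24):
    """Sample an out_size x out_size canonical window from a grayscale image.

    The window is centered at (cx, cy) and covers a square region of side
    out_size * scale, resampled via bilinear interpolation. This is a
    hypothetical sketch of an attention 'gaze' step, not the paper's code.
    """
    H, W = image.shape
    # Offsets of output pixels from the window center, in input-pixel units.
    coords = (np.arange(out_size) - (out_size - 1) / 2.0) * scale
    xs = cx + coords
    ys = cy + coords
    gx, gy = np.meshgrid(xs, ys)  # sampling grid in input coordinates

    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.clip(np.floor(gx).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(gy).astype(int), 0, H - 2)
    wx = np.clip(gx - x0, 0.0, 1.0)
    wy = np.clip(gy - y0, 0.0, 1.0)

    top = image[y0, x0] * (1 - wx) + image[y0, x0 + 1] * wx
    bot = image[y0 + 1, x0] * (1 - wx) + image[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy
```

In the framework the abstract outlines, the ConvNet would predict (cx, cy, scale) for a scene, and the generative model would be trained only on the resulting canonical windows, letting it ignore background clutter.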

Author Information

Charlie Tang (Apple Inc.)
Nitish Srivastava (Apple Inc.)
Russ Salakhutdinov (Carnegie Mellon University)
