Learning Disentangled Features: from Perception to Control
Emily Denton · Siddharth Narayanaswamy · Tejas Kulkarni · Honglak Lee · Diane Bouchacourt · Josh Tenenbaum · David Pfau

Sat Dec 9th 08:00 AM -- 06:30 PM @ 203

An important facet of human experience is our ability to break down what we observe and interact with along characteristic lines. Visual scenes consist of separate objects, which may have different poses and identities within their category. In natural language, the syntax and semantics of a sentence can often be separated from one another. In planning and cognition, plans can be broken down into immediate and long-term goals. Inspired by this, much research in deep representation learning has gone into finding disentangled factors of variation. However, this research often lacks a clear definition of what disentangling is, or much relation to work in other branches of machine learning, neuroscience, or cognitive science. In this workshop we intend to bring a wide swathe of scientists studying disentangled representations under one roof, to try to come to a unified view of the problem of disentangling.

The workshop will address these issues through three focus areas:
What is disentangling: Are disentangled representations just the same as statistically independent representations, or is there something more? How does disentangling relate to interpretability? Can we define what it means to separate style and content, or is human judgement the final arbiter? Are disentangled representations the same as equivariant representations?
How can disentangled representations be discovered: What is the current state of the art in learning disentangled representations? What are the cognitive and neural underpinnings of disentangled representations in animals and humans? Most work in disentangling has focused on perception, but we will encourage dialogue with researchers in natural language processing and reinforcement learning as well as neuroscientists and cognitive scientists.
Why do we care about disentangling: What downstream tasks can benefit from using disentangled representations? Does the downstream task determine which disentanglement is relevant to learn? What does disentangling buy us in terms of improved prediction or behavior in intelligent agents?
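To make the second focus area concrete: one widely discussed approach to learning disentangled representations is the beta-VAE objective of invited speaker Irina Higgins and colleagues, which reweights the KL term of a variational autoencoder to pressure the posterior toward an isotropic prior. A minimal numerical sketch, assuming a diagonal-Gaussian posterior and a squared-error reconstruction term (function names are illustrative, not from the workshop materials):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - log_var - 1.0, axis=-1)

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction error plus a beta-weighted KL penalty.
    # With beta > 1, the extra pressure toward the factorized prior
    # empirically encourages statistically independent latent factors.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return recon + beta * gaussian_kl(mu, log_var)
```

Whether such statistically independent codes are the same thing as "disentangled" ones is exactly the question the first focus area puts up for debate.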

08:30 AM Welcome: Josh Tenenbaum (Intro) Josh Tenenbaum
09:00 AM Stefano Soatto (Invited talk) Stefano Soatto
09:30 AM Irina Higgins (Invited talk) Irina Higgins
10:00 AM Finale Doshi-Velez (Invited talk) Finale Doshi-Velez
10:30 AM Poster session + Coffee break (Break)
11:00 AM Doris Tsao (Invited talk) Doris Tsao
11:30 AM Spotlight talks (Spotlight)
12:15 PM Lunch (Break)
02:00 PM Doina Precup (Invited talk) Doina Precup
02:30 PM Pushmeet Kohli (Invited talk) Pushmeet Kohli
03:00 PM Poster session + Coffee break (Poster session)
Mikael Kågebäck, Igor Melnyk, Amir-Hossein Karimi, Gino Brunner, Ershad Banijamali, Chris Donahue, Jake Zhao, Giambattista Parascandolo, Valentin Thomas, Abhishek Kumar, Chris Burgess, Amanda Nilsson, Maria Larsson, Cian Eastwood, Momchil Peychev
03:30 PM Yoshua Bengio (Invited talk) Yoshua Bengio
04:00 PM Ahmed Elgammal (Invited talk) Ahmed Elgammal
04:30 PM Final Poster Break (Break)
05:00 PM Panel discussion (Panel)

Author Information

Emily Denton (New York University)
Siddharth Narayanaswamy (University of Oxford)
Tejas Kulkarni (DeepMind)
Honglak Lee (Google / U. Michigan)
Diane Bouchacourt (Facebook)
Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).

David Pfau (DeepMind)
