Workshop
Intuitive Physics
Adam Lerer · Jiajun Wu · Josh Tenenbaum · Emmanuel Dupoux · Rob Fergus

Fri Dec 9th 08:00 AM -- 06:30 PM @ Hilton Diag. Mar, Blrm. C
Event URL: http://phys.csail.mit.edu

Despite recent progress, AI is still far from achieving common sense reasoning. One area gathering a lot of interest is that of intuitive or naive physics: the ability that humans and, to a certain extent, infants and animals have to predict the outcomes of physical interactions involving macroscopic objects. There is extensive experimental evidence that infants can predict the outcome of events based on physical concepts such as gravity, solidity, object permanence, and conservation of shape and number at an early stage of development, although there is also evidence that this capacity develops through time and experience. Recent work has attempted to build neural models that can make predictions about stability, collisions, forces, and velocities from images, videos, or interactions with an environment. Such models could be used both to understand the cognitive and neural underpinnings of naive physics in humans and to provide AI applications with better inference and reasoning abilities.

This workshop will bring together researchers in machine learning, computer vision, robotics, computational neuroscience, and cognitive development to discuss artificial systems that capture or model intuitive physics by learning from footage of, or interactions with a real or simulated environment. There will be invited talks from world leaders in the fields, presentations and poster sessions based on contributed papers, and a panel discussion.

Topics of discussion will include:
- Learning models of Newtonian physics (deep networks, structured probabilistic generative models, physics engines)
- How to combine model-based and bottom-up approaches to intuitive physics
- Application of intuitive physics models to higher-level tasks such as navigation, video prediction, robotics, etc.
- How cognitive science and computational neuroscience literature may inform the design of artificial systems for physical prediction
- Methodology for comparing models of infant learning with clinical studies
- Development of new datasets or platforms for intuitive physics and visual commonsense

08:40 AM Opening Remarks (Talk) Josh Tenenbaum
09:00 AM Naive Physics 101: A Tutorial (Talk) Emmanuel Dupoux, Josh Tenenbaum
09:30 AM Poster Spotlights (Spotlight)
10:00 AM Ali Farhadi (Talk) Ali Farhadi
10:30 AM Coffee Break / Posters 1 (Poster Session)
11:00 AM Peter Battaglia (Talk) Peter Battaglia
11:30 AM Jitendra Malik and Pulkit Agrawal (Talk) Jitendra Malik, Pulkit Agrawal
12:00 PM Lunch Break (Break)
02:00 PM Abhinav Gupta (Talk) Abhinav Gupta
02:30 PM Bill Freeman (Talk) Bill Freeman
03:00 PM Coffee Break / Posters 2 (Poster Session)
03:30 PM Imagination-Based Decision Making with Physical Models in Deep Neural Networks (Talk) Jessica B Hamrick
03:50 PM Visual Stability Prediction and Its Application to Manipulation (Talk) Wenbin Li
04:10 PM Deep Visual Foresight for Planning Robot Motion (Talk) Chelsea Finn
04:30 PM Datasets, Methodology, and Challenges in Intuitive Physics (Talk) Emmanuel Dupoux, Josh Tenenbaum
05:30 PM Panel Discussion

Author Information

Adam Lerer (Facebook AI Research)
Jiajun Wu (MIT)

Jiajun Wu is a fifth-year Ph.D. student at the Massachusetts Institute of Technology, advised by Professor Bill Freeman and Professor Josh Tenenbaum. His research interests lie at the intersection of computer vision, machine learning, and computational cognitive science. Before coming to MIT, he received his B.Eng. from Tsinghua University, China, advised by Professor Zhuowen Tu. He has also spent time working at the research labs of Microsoft, Facebook, and Baidu.

Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).

Emmanuel Dupoux (Ecole des Hautes Etudes en Sciences Sociales)
Rob Fergus (Facebook AI Research)
