Workshop
Deep Reinforcement Learning
David Silver · Satinder Singh · Pieter Abbeel · Peter Chen

Fri Dec 9th 08:00 AM -- 06:30 PM @ Area 1
Event URL: https://sites.google.com/site/deeprlnips2016/

Although the theory of reinforcement learning addresses an extremely general class of learning problems with a common mathematical formulation, its power has been limited by the need to develop task-specific feature representations. A paradigm shift is occurring as researchers figure out how to use deep neural networks as function approximators in reinforcement learning algorithms; this line of work has yielded remarkable empirical results in recent years. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help researchers with expertise in one of these fields to learn about the other.
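As a concrete illustration of the idea described above (a minimal sketch, not drawn from the workshop materials), the PyTorch code below uses a small neural network as a Q-value function approximator in a DQN-style temporal-difference update. The network size, hyperparameters, and helper names (QNetwork, td_update) are illustrative assumptions, not part of any specific method presented at the workshop.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small fully connected network that maps a state vector to one
    Q-value per discrete action, standing in for hand-crafted features."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def td_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Q-learning (temporal-difference) step on a batch of transitions.
    `batch` is assumed to be (states, actions, rewards, next_states, dones)."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions actually taken.
    q_taken = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from a (periodically copied) target network.
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    loss = nn.functional.mse_loss(q_taken, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()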

09:00 AM Richard Sutton (Invited Speaker)
09:30 AM Contributed Talks - Session 1
10:00 AM John Schulman (Invited Speaker)
11:00 AM Raia Hadsell (Invited Speaker)
11:30 AM Contributed Talks - Session 2
12:00 PM Chelsea Finn (Invited Speaker)
12:30 PM Lunch (Break)
01:30 PM Nando de Freitas (Invited Speaker)
02:00 PM Contributed Talks - Session 3
02:30 PM Posters - Session 1
03:00 PM Coffee Break
03:30 PM Late Breaking Talk
03:45 PM Junhyuk Oh (Invited Speaker)
04:15 PM Josh Tenenbaum (Invited Speaker)
04:45 PM Panel Discussion
05:30 PM Posters - Session 2

Author Information

David Silver (Google DeepMind)
Satinder Singh (University of Michigan)
Pieter Abbeel (UC Berkeley | Gradescope | Covariant)

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on how to make robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

Peter Chen (covariant.ai)
