Workshop
Deep Reinforcement Learning
Pieter Abbeel · John Schulman · Satinder Singh · David Silver

Fri Dec 11th 08:30 AM -- 06:30 PM @ 513 cd
Event URL: http://rll.berkeley.edu/deeprlworkshop

Although the theory of reinforcement learning addresses an extremely general class of learning problems with a common mathematical formulation, its power has been limited by the need to develop task-specific feature representations. A paradigm shift is occurring as researchers figure out how to use deep neural networks as function approximators in reinforcement learning algorithms; this line of work has yielded remarkable empirical results in recent years. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help those with expertise in one of these fields learn about the other.
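To make the abstract's central idea concrete, here is a minimal illustrative sketch (not from the workshop materials) of what "deep neural networks as function approximators" means in the simplest case: semi-gradient Q-learning on a hypothetical 5-state chain MDP, where a small two-layer network replaces the usual Q-table. All names and hyperparameters are assumptions chosen for the example.

```python
import numpy as np

# Illustrative toy example: semi-gradient Q-learning where a small MLP
# (one-hot state -> tanh hidden layer -> Q-values) approximates Q(s, a)
# instead of a lookup table. Environment: 5-state chain, reward 1.0 at the
# rightmost state. All settings here are assumptions for demonstration.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2          # actions: 0 = left, 1 = right
GAMMA, ALPHA, EPSILON = 0.9, 0.05, 0.1

W1 = rng.normal(0, 0.1, (N_STATES, 16))
W2 = rng.normal(0, 0.1, (16, N_ACTIONS))

def q_values(s):
    """Forward pass: return hidden activations and Q-values for state s."""
    x = np.eye(N_STATES)[s]
    h = np.tanh(x @ W1)
    return h, h @ W2

def step(s, a):
    """Chain dynamics: move left/right; episode ends at the right end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        h, q = q_values(s)
        # Epsilon-greedy action selection.
        a = rng.integers(N_ACTIONS) if rng.random() < EPSILON else int(np.argmax(q))
        s2, r, done = step(s, a)
        # Semi-gradient TD target: the bootstrap term is held fixed.
        target = r if done else r + GAMMA * np.max(q_values(s2)[1])
        td_error = target - q[a]
        # Backpropagate the squared TD error through both layers.
        grad_q = np.zeros(N_ACTIONS)
        grad_q[a] = -td_error
        grad_h = grad_q @ W2.T * (1 - h ** 2)   # compute before updating W2
        W2 -= ALPHA * np.outer(h, grad_q)
        W1 -= ALPHA * np.outer(np.eye(N_STATES)[s], grad_h)
        s = s2

# After training, the greedy policy should move right from every
# non-terminal state.
policy = [int(np.argmax(q_values(s)[1])) for s in range(N_STATES)]
print(policy)
```

The workshop talks concern far richer versions of this idea (convolutional networks over pixels, experience replay, target networks), but the core substitution, a parametric network in place of a tabular value function, is the same.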

09:00 AM Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning (Talk) Honglak Lee
09:30 AM On General Problem Solving and How to Learn an Algorithm (Talk) Jürgen Schmidhuber
11:00 AM The Deep Reinforcement Learning Boom (Talk) Volodymyr Mnih
11:30 AM Deep RL in Games Research (Talk) Gerald Tesauro
12:00 PM Osaro (Talk) Itamar Arel
02:00 PM Deep Robotic Learning (Talk) Sergey Levine
02:30 PM RL for DL (Talk) Yoshua Bengio
05:00 PM Deep RL for Learning Machines - How to do Deep RL in Real World (Talk) Martin Riedmiller
05:30 PM Compressed Neural Networks for RL (Talk) Jan Koutnik

Author Information

Pieter Abbeel (UC Berkeley)

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on how robots can learn from people (apprenticeship learning), learn through their own trial and error (reinforcement learning), and speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA, early career awards from NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

John Schulman (UC Berkeley)

John is a research scientist at OpenAI. Previously, he was in the computer science PhD program at UC Berkeley, and before that he studied physics at Caltech. His research focuses on reinforcement learning, where he strives to develop systems that can match the impressive locomotion, navigation, and manipulation skills of mammals and birds; he is especially interested in applications in robotics. He previously performed research in (and remains interested in) neuroscience. Outside of work, he enjoys reading, running, and listening to jazz.

Satinder Singh (University of Michigan)
David Silver (DeepMind)
