Workshop
Sat Dec 14 08:00 AM -- 06:00 PM (PST) @ West 211 - 214
Learning Transferable Skills
Marwan Mattar · Arthur Juliani · Danny Lange · Matthew Crosby · Benjamin Beyret

After spending several decades on the margins of AI, reinforcement learning has recently emerged as a powerful framework for developing intelligent systems that can solve complex tasks in real-world environments. This has had a tremendous impact on a wide range of tasks, from playing games such as Go and StarCraft to learning dexterous manipulation. However, one attribute of intelligence that still eludes modern learning systems is generalizability. Until very recently, the majority of reinforcement learning research involved training and testing algorithms on the same, sometimes deterministic, environment. This has resulted in algorithms that learn policies which typically perform poorly when deployed in environments that differ, even slightly, from those they were trained on. Even more importantly, the paradigm of task-specific training results in learning systems that scale poorly to a large number of (even interrelated) tasks.

Recently there has been growing interest in developing learning systems that can acquire transferable skills. This could mean robustness to changing environment dynamics, the ability to quickly adapt to environment and task variations, or the ability to learn to perform multiple tasks at once (or any combination thereof). This interest has also produced a number of new data sets and challenges (e.g. Obstacle Tower Environment, Animal-AI, CoinRun) and a sense of urgency around standardizing the metrics and evaluation protocols used to assess the generalization abilities of novel algorithms. We expect this area to continue to grow in popularity and importance, but this can only happen if we manage to build consensus on which approaches are promising and, equally important, how to test them.
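To make the evaluation question concrete, the minimal sketch below shows one common protocol for measuring generalization: environment instances are indexed by a seed, a policy is trained on one set of seeds, and performance is reported on a held-out set of seeds never seen during training. The ToyEnv and RandomPolicy classes are hypothetical placeholders for illustration only, not taken from any of the benchmarks named above.

```python
# Hypothetical sketch of a seed-based train/test evaluation protocol.
import random
from statistics import mean

class ToyEnv:
    """Stand-in environment whose dynamics depend on the seed (placeholder)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self.rng.random()                      # observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == int(self.rng.random() > 0.5) else 0.0
        done = self.steps >= 10
        return self.rng.random(), reward, done

class RandomPolicy:
    """Placeholder for a learned policy."""
    def act(self, obs):
        return random.randint(0, 1)

def episode_return(env, policy):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy.act(obs))
        total += reward
    return total

train_seeds = range(0, 200)    # environment instances available during training
test_seeds = range(200, 300)   # held-out instances used only for evaluation
policy = RandomPolicy()        # in practice, trained on train_seeds only

print("train return:", mean(episode_return(ToyEnv(s), policy) for s in train_seeds))
print("test return: ", mean(episode_return(ToyEnv(s), policy) for s in test_seeds))
```

The gap between the two reported returns is one simple measure of how much a policy overfits to the environments it was trained on.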

The workshop will include a mix of invited speakers, accepted papers (oral and poster sessions), and a panel discussion. It welcomes both theoretical and applied research, as well as novel data sets and evaluation protocols.

Opening Remarks (Announcement)
Challenges of Deep RL in Complex Environments (Invited Talk)
Coffee Break (Break)
Environments and Data Sets (Invited Talk)
Vladlen Koltun (Intel) (Invited Talk)
Lunch (Break)
Innate Bodies, Innate Brains, and Innate World Models (Invited Talk)
Oral Presentations (Talk)
Poster Presentations (Poster Session)
Multi-Task Reinforcement Learning and Generalization (Invited Talk)
Solving Rubik’s Cube with a Robot Hand (Invited Talk)
Closing Remarks (Announcement)