Novel Trends and Applications in Reinforcement Learning
Csaba Szepesvari · Marc Deisenroth (he/him) · Sergey Levine · Pedro Ortega · Brian Ziebart · Emma Brunskill · Naftali Tishby · Gerhard Neumann · Daniel Lee · Sridhar Mahadevan · Pieter Abbeel · David Silver · Vicenç Gómez

Sat Dec 13 05:30 AM -- 03:30 PM (PST) @ Level 5, room 512 a, e
Event URL: http://tcrl14.wordpress.com/

The last decade has witnessed a series of technological advances: social networks, cloud servers, personalized advertising, autonomous cars, personalized healthcare, robotics, security systems, just to name a few. These new technologies have in turn substantially reshaped our demands from adaptive reinforcement learning systems, defining novel yet urgent challenges. In response, a wealth of novel ideas and trends have emerged, tackling problems such as modelling rich and high-dimensional dynamics, life-long learning, resource-bounded planning, and multi-agent cooperation.

The objective of the workshop is to provide a platform for researchers from various areas (e.g., deep learning, game theory, robotics, computational neuroscience, information theory, Bayesian modelling) to disseminate and exchange ideas, evaluating their advantages and caveats. In particular, we will ask participants to address the following questions:

1) What is the future of reinforcement learning?
2) What are the most important challenges?
3) What tools do we need the most?

A final panel discussion will then review the provided answers and focus on elaborating a list of trends and future challenges. Recent advances will be presented in short talks and a poster session based on contributed material.

Author Information

Csaba Szepesvari (University of Alberta)
Marc Deisenroth (he/him) (Imperial College London)

Professor Marc Deisenroth is the DeepMind Chair in Artificial Intelligence at University College London and the Deputy Director of UCL's Centre for Artificial Intelligence. He also holds visiting faculty positions at the University of Johannesburg and Imperial College London. Marc's research interests center around data-efficient machine learning, probabilistic modeling, and autonomous decision making. Marc was Program Chair of EWRL 2012, Workshops Chair of RSS 2013, and EXPO Co-Chair of ICML 2020. In 2019, Marc co-organized the Machine Learning Summer School in London. He received paper awards at ICRA 2014, ICCAS 2016, and ICML 2020. He is co-author of the book [Mathematics for Machine Learning](https://mml-book.github.io), published by Cambridge University Press (2020).

Sergey Levine (UC Berkeley)
Pedro Ortega (DeepMind)
Brian Ziebart (University of Illinois at Chicago)
Emma Brunskill (CMU)
Naftali Tishby (The Hebrew University Jerusalem)

Naftali Tishby is a professor of computer science and the director of the Interdisciplinary Center for Neural Computation (ICNC) at the Hebrew University of Jerusalem. He received his Ph.D. in theoretical physics from the Hebrew University and was a research staff member at MIT and Bell Labs from 1985 to 1991. He was also a visiting professor at Princeton NECI, the University of Pennsylvania, and the University of California at Santa Barbara. Dr. Tishby is a leader in machine learning research and computational neuroscience. He was among the first to introduce methods from statistical physics into learning theory, and dynamical systems techniques into speech processing. His current research is at the interface between computer science, statistical physics, and computational neuroscience, and concerns the foundations of biological information processing and the connections between dynamics and information.

Gerhard Neumann (University of Lincoln)
Daniel Lee (Cornell Tech)
Sridhar Mahadevan (UMass Amherst)
Pieter Abbeel (UC Berkeley & Covariant)

Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Director of the Berkeley AI Research (BAIR) Lab, Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of the AI@TheHouse venture fund, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), making robots learn through their own trial and error (reinforcement learning), and speeding up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA, early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

David Silver (DeepMind)
Vicenç Gómez (Universitat Pompeu Fabra)