Invited Talk
in
Competition: The Robot Air Hockey Challenge: Robust, Reliable, and Safe Learning Techniques for Real-world Robotics

Making Real-World Reinforcement Learning Practical

Sergey Levine


Abstract:

Reinforcement learning offers an appealing formalism for autonomously acquiring robotic skills. Part of its appeal is its generality. However, practical robotic learning is not a perfect fit for the standard RL problem statement: the challenges range from the obvious ones of sample complexity and exploration to deeper issues, such as the lack of clearly specified reward functions and the impracticality of episodic learning in a world that cannot be reset at will. Making RL practical in robotics therefore requires not only designing efficient algorithms, but also accounting for these practical aspects of the RL setup. This problem of "scaffolding" reinforcement learning itself involves numerous algorithmic challenges. In this talk, I will discuss some ways we can approach these challenges, from practical, safe, and reliable reinforcement learning that is efficient enough to run on real-world platforms, to automating reward function evaluation and resets.