

Poster in Workshop: 3rd Offline Reinforcement Learning Workshop: Offline RL as a "Launchpad"

Offline Reinforcement Learning on Real Robot with Realistic Data Sources

Gaoyue Zhou · Liyiming Ke · Siddhartha Srinivasa · Abhinav Gupta · Aravind Rajeswaran · Vikash Kumar


Abstract:

Offline Reinforcement Learning (ORL) provides a framework for training control policies from fixed, sub-optimal datasets, making it suitable for safety-critical applications like robotics. Despite significant algorithmic advances and benchmarking in simulation, the evaluation of ORL algorithms on real-world robot learning tasks has been limited. Since real robots are sensitive to details like sensor noise, reset conditions, demonstration sources, and test-time distribution, it remains an open question whether ORL is a competitive solution to real robotic challenges and what would characterize such tasks. We aim to address this deficiency through an empirical study of representative ORL algorithms on four table-top manipulation tasks using a Franka-Panda robot arm. Our evaluation finds that for scenarios with sufficient in-domain data of high quality, specialized ORL algorithms can be competitive with the behavior cloning approach. However, for scenarios that require out-of-distribution generalization or task transfer, ORL algorithms can learn and generalize from offline heterogeneous datasets and outperform behavior cloning. Project URL: https://sites.google.com/view/real-orl-anon
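To make the contrast between behavior cloning and ORL on a fixed dataset concrete, below is a minimal, self-contained sketch (not the paper's code or any specific algorithm evaluated in it). It compares a plain behavior-cloning regression loss with an advantage-weighted imitation update in the spirit of offline RL methods, which downweights low-quality transitions in heterogeneous data. All names, dimensions, and the random "dataset" and "advantages" are illustrative placeholders; in practice the advantages would come from a learned critic.

```python
# Illustrative sketch only: behavior cloning vs. an advantage-weighted
# offline update on a fixed batch. Dimensions and data are hypothetical.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, BATCH = 32, 7, 256  # placeholder sizes, not from the paper

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# A fixed offline batch of (state, action, advantage) tuples. Here the data is
# random noise; a real pipeline would load logged robot transitions and
# estimate advantages with a learned Q/V critic.
obs = torch.randn(BATCH, OBS_DIM)
act = torch.randn(BATCH, ACT_DIM)
adv = torch.randn(BATCH)

# Behavior cloning: regress directly onto the dataset actions,
# treating every transition as equally worth imitating.
bc_loss = ((policy(obs) - act) ** 2).mean()

# Advantage-weighted imitation (in the spirit of AWAC/IQL-style methods):
# transitions the critic scores highly get larger weight, so sub-optimal or
# heterogeneous data contributes less to the policy update.
weights = torch.softmax(adv / 1.0, dim=0).detach()  # temperature = 1.0
orl_loss = (weights * ((policy(obs) - act) ** 2).mean(dim=-1)).sum()

opt.zero_grad()
orl_loss.backward()
opt.step()
print(f"bc_loss={bc_loss.item():.3f}  orl_loss={orl_loss.item():.3f}")
```

The weighting is the only difference between the two objectives in this sketch, which is one intuition for why offline RL can outperform behavior cloning when the dataset mixes demonstrations of varying quality or from different tasks.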
