Real Robot Challenge III - Learning Dexterous Manipulation from Offline Data in the Real World

Nico Gürtler · Georg Martius · Sebastian Blaes · Pavel Kolev · Cansu Sancaktar · Stefan Bauer · Manuel Wuethrich · Markus Wulfmeier · Martin Riedmiller · Arthur Allshire · Annika Buchholz · Bernhard Schölkopf

Tue 6 Dec 3 a.m. PST — 6 a.m. PST


In this year's Real Robot Challenge, participants apply offline reinforcement learning (RL) to robotics datasets and evaluate their policies remotely on a cluster of real TriFinger robots. Experimentation on real robots is usually costly and challenging, which is why a large part of the RL community develops and benchmarks algorithms in simulation. However, insights gained in simulation do not necessarily transfer to real robots, in particular for tasks involving complex interaction with the environment. The purpose of this competition is to alleviate this problem by allowing participants to experiment remotely with a real robot, as easily as in simulation. Over the last two years, offline RL algorithms have become increasingly popular and capable. This year's Real Robot Challenge provides a platform for evaluating, comparing and showcasing the performance of these algorithms on real-world data. In particular, we propose a dexterous manipulation problem that involves pushing, grasping and in-hand reorientation of blocks.
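The core idea of offline RL is to learn a policy from a fixed, logged dataset without further environment interaction. As a minimal illustration (not competition code), the sketch below performs the simplest offline approach, behavior cloning, by fitting a linear policy to synthetic (observation, action) pairs; the dataset shapes and names are hypothetical:

```python
import numpy as np

# Hypothetical logged dataset of (observation, action) pairs,
# generated synthetically here for illustration only.
rng = np.random.default_rng(0)
obs_dim, act_dim, n = 9, 3, 500
observations = rng.normal(size=(n, obs_dim))
true_weights = rng.normal(size=(obs_dim, act_dim))
actions = observations @ true_weights + 0.01 * rng.normal(size=(n, act_dim))

# Behavior cloning with a linear policy: a least-squares fit of
# logged actions on logged observations, with no environment access.
weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def policy(obs: np.ndarray) -> np.ndarray:
    """Map an observation to an action using the cloned linear policy."""
    return obs @ weights

# The cloned policy should reproduce the logged actions closely.
mse = float(np.mean((policy(observations) - actions) ** 2))
print(f"behavior-cloning MSE on the dataset: {mse:.5f}")
```

Actual competition entries would typically use more sophisticated offline RL algorithms that also account for reward signals and distribution shift, but the data flow, a fixed dataset in, a policy out, is the same.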