

Poster

Robot Policy Learning with Temporal Optimal Transport Reward

Yuwei Fu · Haichao Zhang · Di Wu · Wei Xu · Benoit Boulet

West Ballroom A-D #6606
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Reward specification is one of the trickiest problems in Reinforcement Learning (RL), and it usually requires tedious hand engineering in practice. One promising approach to tackling this challenge is to adopt existing expert video demonstrations for policy learning. Some recent work has investigated how to learn robot policies from only a single or a few expert video demonstrations. For example, reward labeling via Optimal Transport (OT) has been shown to be an effective strategy for generating a proxy reward by measuring the alignment between the robot trajectory and the expert demonstrations. However, previous work mostly overlooks that the OT reward is invariant to temporal order information, which can bring extra noise into the reward signal. To address this issue, in this paper we introduce the Temporal Optimal Transport Reward (TemporalOT), which incorporates temporal information to learn a more accurate OT-based proxy reward. Extensive experiments on the Meta-world benchmark tasks validate the efficacy of the proposed method. Our code will be released at: https://github.com/Anonymous/TemporalOT-RL.
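To make the core idea concrete, below is a minimal sketch (not the authors' released implementation) of an OT-based proxy reward with a temporal constraint: the transport plan between agent and expert frame embeddings is restricted to a band around the time-aligned diagonal of the cost matrix, so that frames can only be matched to temporally nearby expert frames. The function names (`sinkhorn`, `temporal_ot_reward`) and the parameters `window` and `eps` are illustrative assumptions, as is the choice of cosine cost and a hard band mask.

```python
import numpy as np

def sinkhorn(cost, mask, eps=0.05, n_iters=100):
    """Entropic-regularized OT plan via Sinkhorn iterations.

    cost: (T, T') pairwise cost matrix between agent and expert frames.
    mask: (T, T') binary mask; masked-out (0) entries are effectively
          forbidden, restricting transport to temporally nearby pairs.
    """
    T, Tp = cost.shape
    # Uniform marginals over agent and expert time steps.
    a, b = np.full(T, 1.0 / T), np.full(Tp, 1.0 / Tp)
    # Gibbs kernel; masked entries contribute (near) zero weight.
    K = np.exp(-cost / eps) * mask + 1e-30
    u = np.ones(T)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan (T, T')

def temporal_ot_reward(agent_emb, expert_emb, window=10, eps=0.05):
    """Per-step proxy reward from a temporally masked OT alignment.

    agent_emb:  (T, d) embeddings of the agent trajectory.
    expert_emb: (T', d) embeddings of the expert demonstration.
    window: half-width of the band around the (rescaled) diagonal
            within which transport is allowed (temporal constraint).
    """
    # Cosine cost between every agent/expert frame pair.
    an = agent_emb / np.linalg.norm(agent_emb, axis=1, keepdims=True)
    en = expert_emb / np.linalg.norm(expert_emb, axis=1, keepdims=True)
    cost = 1.0 - an @ en.T

    # Band mask: agent frame i may only align with expert frames near
    # the proportionally matching time index.
    T, Tp = cost.shape
    i = np.arange(T)[:, None] * (Tp / T)
    j = np.arange(Tp)[None, :]
    mask = (np.abs(i - j) <= window).astype(float)

    plan = sinkhorn(cost, mask, eps=eps)
    # Reward at step t is the negative transported cost of row t, so
    # steps that align cheaply with the expert receive higher reward.
    return -(plan * cost).sum(axis=1)
```

Without the mask, this reduces to a standard OT reward, which is indifferent to the order in which matching frames occur; the band mask is one simple way to encode the temporal prior that progress through the demonstration should be roughly monotone.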
