Poster
MoVie: Visual Model-Based Policy Adaptation for View Generalization
Sizhe Yang · Yanjie Ze · Huazhe Xu
Great Hall & Hall B1+B2 (level 1) #1407
Abstract:
Visual Reinforcement Learning (RL) agents trained on limited views face significant challenges in generalizing their learned abilities to unseen views. This inherent difficulty is known as the problem of view generalization. In this work, we systematically categorize this fundamental problem into four distinct and highly challenging scenarios that closely resemble real-world situations. Subsequently, we propose a straightforward yet effective approach to enable successful adaptation of visual Model-based policies for View generalization (MoVie) during test time, without any need for explicit reward signals or any modification during training time. Our method demonstrates substantial advancements across all four scenarios, encompassing tasks sourced from DMControl, xArm, and Adroit, with significant relative improvements on each benchmark. The superior results highlight the immense potential of our approach for real-world robotics applications. Code and videos are available at https://yangsizhe.github.io/MoVie/.
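To make the test-time setting concrete, below is a minimal sketch of how a visual model-based policy can be adapted under a novel view using only a self-supervised objective, with no reward signal and no change to the training pipeline. This is an illustrative assumption, not the authors' implementation: the `LatentDynamics` module, `test_time_adapt_step` function, and all architectural choices here are hypothetical stand-ins for the idea described in the abstract.

```python
# Hedged sketch of reward-free test-time adaptation for a visual model-based policy.
# All module names and hyperparameters are illustrative assumptions, not MoVie's code.
import torch
import torch.nn as nn


class LatentDynamics(nn.Module):
    """Predicts the next latent state from the current latent and the action."""

    def __init__(self, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, a], dim=-1))


def test_time_adapt_step(encoder, dynamics, policy, optimizer, obs, next_obs):
    """One adaptation step on a transition collected under the unseen view.

    Only the encoder's adaptation parameters (whatever the optimizer was built
    over) are updated; the policy and dynamics model stay frozen, so no reward
    signal is needed and training-time components are left untouched.
    """
    with torch.no_grad():
        a = policy(encoder(obs))           # act with the currently adapting encoder
        z_next_target = encoder(next_obs)  # latent of the actually observed next frame

    z = encoder(obs)
    z_next_pred = dynamics(z, a)
    # Self-supervised consistency loss: predicted next latent vs. encoded next latent.
    loss = nn.functional.mse_loss(z_next_pred, z_next_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return a, loss.item()
```

In such a setup the optimizer would typically be constructed only over the encoder (or a small set of adaptation parameters within it), so each interaction step under the new viewpoint both produces an action and nudges the visual representation back toward one the frozen policy and dynamics model were trained on.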