We propose to address quadrupedal locomotion tasks using Reinforcement Learning (RL) with a Transformer-based model that learns to combine proprioceptive information and high-dimensional depth sensor inputs. While learning-based locomotion has made great advances using RL, most methods still rely on domain randomization to train blind agents that generalize to challenging terrains. Our key insight is that proprioceptive states only offer contact measurements for immediate reaction, whereas an agent equipped with visual sensory observations can learn to proactively maneuver through environments with obstacles and uneven terrain by anticipating changes many steps ahead. In this paper, we introduce LocoTransformer, an end-to-end RL method that leverages both proprioceptive states and visual observations for locomotion control. We evaluate our method in challenging simulated environments with different obstacles and uneven terrain. We transfer our learned policy from simulation to a real robot and deploy it both indoors and in the wild, with unseen obstacles and terrain. Our method not only significantly improves over baselines, but also achieves far better generalization performance, especially when transferred to the real robot. Our project page with videos is at https://LocoTransformer.github.io/.
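The cross-modal fusion described in the abstract can be pictured as follows: depth-image features are flattened into a sequence of visual tokens, the proprioceptive state is projected to an additional token, and a Transformer encoder attends across both modalities before a policy head predicts joint-level actions. The sketch below illustrates this idea in PyTorch; the `CrossModalPolicy` name and all layer sizes, token counts, and input shapes are illustrative assumptions, not the exact LocoTransformer architecture.

```python
# Minimal sketch of a cross-modal Transformer policy (PyTorch).
# All dimensions and module choices here are assumptions for illustration.
import torch
import torch.nn as nn

class CrossModalPolicy(nn.Module):
    def __init__(self, proprio_dim=93, action_dim=12, embed_dim=128,
                 num_layers=2, num_heads=4):
        super().__init__()
        # Depth-image encoder: a small ConvNet whose spatial feature map
        # is flattened into a sequence of visual tokens.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Proprioceptive state is projected to a single extra token.
        self.proprio_proj = nn.Linear(proprio_dim, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers)
        self.policy_head = nn.Linear(embed_dim, action_dim)

    def forward(self, depth, proprio):
        # depth: (B, 1, 64, 64) depth image; proprio: (B, proprio_dim)
        feat = self.conv(depth)                       # (B, C, H, W)
        vis_tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        prop_token = self.proprio_proj(proprio).unsqueeze(1)  # (B, 1, C)
        tokens = torch.cat([prop_token, vis_tokens], dim=1)
        fused = self.transformer(tokens)              # attend across modalities
        # Pool over all tokens before predicting actions.
        return self.policy_head(fused.mean(dim=1))

# Usage example with random inputs:
policy = CrossModalPolicy()
actions = policy(torch.randn(2, 1, 64, 64), torch.randn(2, 93))
print(actions.shape)  # torch.Size([2, 12])
```

Sharing one encoder and letting self-attention mix proprioceptive and visual tokens is what allows the policy to anticipate terrain changes from vision while still reacting to contacts, rather than processing each modality in isolation.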
Author Information
Ruihan Yang (UC San Diego)
Minghao Zhang (Tsinghua University)
Nicklas Hansen (UC San Diego)
Huazhe Xu (UC Berkeley)
Xiaolong Wang (UC San Diego)
Related Events (a corresponding poster, oral, or spotlight)
- 2021: Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
More from the Same Authors
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2021: Learning Vision-Guided Quadrupedal Locomotion End-to-End with Cross-Modal Transformers
  Ruihan Yang · Minghao Zhang · Nicklas Hansen · Huazhe Xu · Xiaolong Wang
- 2021: Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
  Rishabh Jangir · Nicklas Hansen · Xiaolong Wang
- 2021: Vision-Guided Quadrupedal Locomotion in the Wild with Multi-Modal Delay Randomization
  Chieko Imai · Minghao Zhang · Ruihan Yang · Yuzhe Qin · Xiaolong Wang
- 2021: Extraneousness-Aware Imitation Learning
  Ray Zheng · Kaizhe Hu · Boyuan Chen · Huazhe Xu
- 2021: Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation
  Rishabh Jangir · Nicklas Hansen · Mohit Jain · Xiaolong Wang
- 2022: On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
  yifan xu · Nicklas Hansen · Zirui Wang · Yung-Chieh Chan · Hao Su · Zhuowen Tu
- 2022: Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset
  Yang Fu · Xiaolong Wang
- 2022: Generalizable Point Cloud Reinforcement Learning for Sim-to-Real Dexterous Manipulation
  Yuzhe Qin · Binghao Huang · Zhao-Heng Yin · Hao Su · Xiaolong Wang
- 2022: Visual Reinforcement Learning with Self-Supervised 3D Representations
  Yanjie Ze · Nicklas Hansen · Yinbo Chen · Mohit Jain · Xiaolong Wang
- 2022: MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations
  Nicklas Hansen · Yixin Lin · Hao Su · Xiaolong Wang · Vikash Kumar · Aravind Rajeswaran
- 2022: Graph Inverse Reinforcement Learning from Diverse Videos
  Sateesh Kumar · Jonathan Zamora · Nicklas Hansen · Rishabh Jangir · Xiaolong Wang
2022 : On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning »
yifan xu · Nicklas Hansen · Zirui Wang · Yung-Chieh Chan · Hao Su · Zhuowen Tu -
- 2023 Poster: H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation
  Yanjie Ze · Yuyao Liu · Ruizhe Shi · Jiaxin Qin · Zhecheng Yuan · Jiashun Wang · Xiaolong Wang · Huazhe Xu
- 2023 Poster: Elastic Decision Transformer
  Yueh-Hua Wu · Xiaolong Wang · Masashi Hamaya
- 2023 Poster: RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization
  Zhecheng Yuan · Sizhe Yang · Pu Hua · Can Chang · Kaizhe Hu · Xiaolong Wang · Huazhe Xu
- 2022 Workshop: Self-Supervised Learning: Theory and Practice
  Ishan Misra · Pengtao Xie · Gul Varol · Yale Song · Yuki Asano · Xiaolong Wang · Pauline Luc
- 2022 Poster: Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset
  Yang Fu · Xiaolong Wang
- 2021: Spotlights
  Hager Radi · Krishan Rana · Yunzhu Li · Shuang Li · Gal Leibovich · Guy Jacob · Ruihan Yang
- 2021 Poster: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2021 Poster: Multi-Person 3D Motion Prediction with Multi-Range Transformers
  Jiashun Wang · Huazhe Xu · Medhini Narasimhan · Xiaolong Wang
- 2021 Poster: NovelD: A Simple yet Effective Exploration Criterion
  Tianjun Zhang · Huazhe Xu · Xiaolong Wang · Yi Wu · Kurt Keutzer · Joseph Gonzalez · Yuandong Tian
- 2020 Poster: Online Adaptation for Consistent Mesh Reconstruction in the Wild
  Xueting Li · Sifei Liu · Shalini De Mello · Kihwan Kim · Xiaolong Wang · Ming-Hsuan Yang · Jan Kautz
- 2020 Poster: Multi-Task Reinforcement Learning with Soft Modularization
  Ruihan Yang · Huazhe Xu · YI WU · Xiaolong Wang
- 2020 Poster: Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning
  Guangxiang Zhu · Minghao Zhang · Honglak Lee · Chongjie Zhang
- 2018: Coffee Break 1 (Posters)
  Ananya Kumar · Siyu Huang · Huazhe Xu · Michael Janner · Parth Chadha · Nils Thuerey · Peter Lu · Maria Bauza · Anthony Tompkins · Guanya Shi · Thomas Baumeister · André Ofner · Zhi-Qi Cheng · Yuping Luo · Deepika Bablani · Jeroen Vanbaar · Kartic Subr · Tatiana López-Guevara · Devesh Jha · Fabian Fuchs · Stefano Rosa · Alison Pouplin · Alex Ray · Qi Liu · Eric Crawford