Poster
A Mixture Of Surprises for Unsupervised Reinforcement Learning
Andrew Zhao · Matthieu Lin · Yangguang Li · Yong-jin Liu · Gao Huang
Unsupervised reinforcement learning aims at learning a generalist policy in a reward-free manner for fast adaptation to downstream tasks. Most of the existing methods propose to provide an intrinsic reward based on surprise. Maximizing or minimizing surprise drives the agent to either explore or gain control over its environment. However, both strategies rely on a strong assumption: the entropy of the environment's dynamics is either high or low. This assumption may not always hold in real-world scenarios, where the entropy of the environment's dynamics may be unknown. Hence, choosing between the two objectives is a dilemma. We propose a novel yet simple mixture of policies to address this concern, allowing us to optimize an objective that simultaneously maximizes and minimizes the surprise. Concretely, we train one mixture component whose objective is to maximize the surprise and another whose objective is to minimize the surprise. Hence, our method does not make assumptions about the entropy of the environment's dynamics. We call our method a $\textbf{M}\text{ixture }\textbf{O}\text{f }\textbf{S}\text{urprise}\textbf{S}$ (MOSS) for unsupervised reinforcement learning. Experimental results show that our simple method achieves state-of-the-art performance on the URLB benchmark, outperforming previous pure surprise maximization-based objectives. Our code is available at: https://github.com/LeapLabTHU/MOSS.
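The core idea can be sketched in a few lines: sample which mixture component acts, then score visited states with a surprise estimate whose sign depends on the component. The sketch below is illustrative only, not the authors' implementation; the k-nearest-neighbor surprise proxy and the function names `knn_surprise` and `intrinsic_reward` are assumptions for exposition.

```python
import numpy as np

def knn_surprise(state, memory, k=3):
    """Particle-based surprise proxy: distance from `state` to its k-th
    nearest neighbor among previously visited states in `memory`.
    Large distance = novel state = high surprise."""
    dists = np.linalg.norm(memory - state, axis=1)
    return np.sort(dists)[min(k, len(dists) - 1)]

def intrinsic_reward(state, memory, mode):
    """mode=+1: surprise-maximizing component (exploration);
    mode=-1: surprise-minimizing component (gaining control)."""
    return mode * knn_surprise(state, memory)

# At the start of each episode, sample which mixture component acts,
# so no assumption is needed about the entropy of the dynamics:
rng = np.random.default_rng(0)
mode = rng.choice([+1, -1])
```

Because the two components share a single objective that only differs in sign, a single policy conditioned on `mode` can be trained on both, which is what makes the mixture simple to optimize.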
Author Information
Andrew Zhao (Tsinghua University)
Andrew Zhao is a PhD student at Tsinghua University. He obtained his master's degree from USC in 2020 and his undergraduate degree from UBC in 2017. His research interests are in machine learning and reinforcement learning.
Matthieu Lin (Tsinghua University)
Yangguang Li (SenseTime)
Yong-jin Liu (Tsinghua University)
Gao Huang (Cornell University)
More from the Same Authors
- 2022 Poster: Contrastive Language-Image Pre-Training with Knowledge Graphs »
  Xuran Pan · Tianzhu Ye · Dongchen Han · Shiji Song · Gao Huang
- 2022 Poster: Provable General Function Class Representation Learning in Multitask Bandits and MDP »
  Rui Lu · Andrew Zhao · Simon Du · Gao Huang
- 2022 Poster: Efficient Knowledge Distillation from Model Checkpoints »
  Chaofei Wang · Qisen Yang · Rui Huang · Shiji Song · Gao Huang
- 2022: Fast-BEV: Towards Real-time On-vehicle Bird’s-Eye View Perception »
  Bin Huang · Yangguang Li · Feng Liang · Enze Xie · Luya Wang · Mingzhu Shen · Fenggang Liu · Tianqi Wang · Ping Luo · Jing Shao
- 2022: Boosting Offline Reinforcement Learning via Data Resampling »
  Yang Yue · Bingyi Kang · Xiao Ma · Zhongwen Xu · Gao Huang · Shuicheng Yan
- 2023 Poster: Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning »
  Shenzhi Wang · Qisen Yang · Jiawei Gao · Matthieu Lin · Hao Chen · Liwei Wu · Ning Jia · Shiji Song · Gao Huang
- 2022 Spotlight: Lightning Talks 4A-4 »
  Yunhao Tang · Ling Liang · Thomas Chau · Daeha Kim · Junbiao Cui · Rui Lu · Lei Song · Byung Cheol Song · Andrew Zhao · Remi Munos · Łukasz Dudziak · Jiye Liang · Ke Xue · Kaidi Xu · Mark Rowland · Hongkai Wen · Xing Hu · Xiaobin Huang · Simon Du · Nicholas Lane · Chao Qian · Lei Deng · Bernardo Avila Pires · Gao Huang · Will Dabney · Mohamed Abdelfattah · Yuan Xie · Marc Bellemare
- 2022 Spotlight: Provable General Function Class Representation Learning in Multitask Bandits and MDP »
  Rui Lu · Andrew Zhao · Simon Du · Gao Huang
- 2022 Spotlight: Lightning Talks 1B-3 »
  Chaofei Wang · Qixun Wang · Jing Xu · Long-Kai Huang · Xi Weng · Fei Ye · Harsh Rangwani · Shrinivas Ramasubramanian · Yifei Wang · Qisen Yang · Xu Luo · Lei Huang · Adrian G. Bors · Ying Wei · Xinglin Pan · Sho Takemori · Hong Zhu · Rui Huang · Lei Zhao · Yisen Wang · Kato Takashi · Shiji Song · Yanan Li · Rao Anwer · Yuhei Umeda · Salman Khan · Gao Huang · Wenjie Pei · Fahad Shahbaz Khan · Venkatesh Babu R · Zenglin Xu
- 2022 Spotlight: Efficient Knowledge Distillation from Model Checkpoints »
  Chaofei Wang · Qisen Yang · Rui Huang · Shiji Song · Gao Huang
- 2022 Poster: Latency-aware Spatial-wise Dynamic Networks »
  Yizeng Han · Zhihang Yuan · Yifan Pu · Chenhao Xue · Shiji Song · Guangyu Sun · Gao Huang
- 2016 Poster: Supervised Word Mover's Distance »
  Gao Huang · Chuan Guo · Matt J Kusner · Yu Sun · Fei Sha · Kilian Weinberger
- 2016 Oral: Supervised Word Mover's Distance »
  Gao Huang · Chuan Guo · Matt J Kusner · Yu Sun · Fei Sha · Kilian Weinberger