Top-performing Model-Based Reinforcement Learning (MBRL) agents, such as Dreamer, learn the world model by reconstructing the image observations. Hence, they often fail to discard task-irrelevant details and struggle to handle visual distractions. To address this issue, previous work has proposed to contrastively learn the world model, but the performance tends to be inferior in the absence of distractions. In this paper, we seek to enhance robustness to distractions for MBRL agents by learning better representations in the world model. For this, prototypical representations seem to be a good candidate, as they have yielded more accurate and robust results than contrastive approaches in computer vision. However, it remains elusive how prototypical representations can benefit temporal dynamics learning in MBRL, since they treat each image independently without capturing temporal structures. To this end, we propose to learn the prototypes from the recurrent states of the world model, thereby distilling temporal structures from past observations and actions into the prototypes. The resulting model, DreamerPro, successfully combines Dreamer with prototypes, making large performance gains on the DeepMind Control suite both in the standard setting and when there are complex background distractions.
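To make the core idea more concrete, below is a minimal, illustrative sketch (in PyTorch) of one way a SwAV-style prototype objective could be attached to a world model's recurrent state, so that prototype assignments reflect temporal context rather than a single image. This is not the authors' implementation: the functions `sinkhorn` and `prototype_loss`, the prototype and embedding sizes, and the assumption that the observation embedding and the recurrent state have already been projected to a shared space are hypothetical choices made for illustration.

```python
import torch
import torch.nn.functional as F

NUM_PROTOS, DIM = 512, 256  # hypothetical sizes
# Learnable prototype vectors shared across time steps.
prototypes = torch.nn.Parameter(torch.randn(NUM_PROTOS, DIM))

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Balanced soft cluster assignments via Sinkhorn-Knopp, as in SwAV."""
    q = torch.exp(scores / eps).T          # (NUM_PROTOS, batch)
    q = q / q.sum()
    for _ in range(n_iters):
        q = q / q.sum(dim=1, keepdim=True) # normalize rows (prototypes)
        q = q / q.shape[0]
        q = q / q.sum(dim=0, keepdim=True) # normalize columns (samples)
        q = q / q.shape[1]
    return (q * q.shape[1]).T              # (batch, NUM_PROTOS)

def prototype_loss(obs_embed, recurrent_state, temperature=0.1):
    """Swapped-prediction loss between an (augmented) observation embedding
    and the world model's recurrent state, both assumed projected to DIM."""
    z_obs = F.normalize(obs_embed, dim=-1)
    z_state = F.normalize(recurrent_state, dim=-1)
    protos = F.normalize(prototypes, dim=-1)

    scores_obs = z_obs @ protos.T          # (batch, NUM_PROTOS)
    scores_state = z_state @ protos.T

    with torch.no_grad():                  # targets: balanced prototype codes
        q_obs = sinkhorn(scores_obs)
        q_state = sinkhorn(scores_state)

    # Each view predicts the other's code, so the prototypes must be
    # consistent with the temporal information carried by the recurrent state.
    loss_obs = -(q_state * F.log_softmax(scores_obs / temperature, dim=-1)).sum(-1).mean()
    loss_state = -(q_obs * F.log_softmax(scores_state / temperature, dim=-1)).sum(-1).mean()
    return loss_obs + loss_state

# Toy usage with random tensors standing in for the encoder output and the
# RSSM deterministic state at one time step:
B = 16
obs_embed = torch.randn(B, DIM)
recurrent_state = torch.randn(B, DIM)
prototype_loss(obs_embed, recurrent_state).backward()
```

In the paper, an objective of this flavor replaces Dreamer's reconstruction loss and is trained jointly with the rest of the world model; the exact formulation differs in its details and is given in the paper.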
Author Information
Fei Deng (Rutgers University)
Ingook Jang (Korea Advanced Institute of Science and Technology)
Sungjin Ahn (KAIST)
More from the Same Authors
- 2021: Stochastic Video Prediction with Perceptual Loss
  Donghun Lee · Ingook Jang · Seonghyun Kim · Chanwon Park · JUN HEE PARK
- 2021: TransDreamer: Reinforcement Learning with Transformer World Models
  · Jaesik Yoon · Yi-Fu Wu · Sungjin Ahn
- 2021: Learning Representations for Zero-Shot Image Generation without Text
  Gautam Singh · Fei Deng · Sungjin Ahn
- 2020: Invited Talk: Sungjin Ahn
  Sungjin Ahn
- 2020 Poster: Generative Neurosymbolic Machines
  Jindong Jiang · Sungjin Ahn
- 2020 Spotlight: Generative Neurosymbolic Machines
  Jindong Jiang · Sungjin Ahn
- 2019 Poster: Variational Temporal Abstraction
  Taesup Kim · Sungjin Ahn · Yoshua Bengio
- 2019 Poster: Neural Multisensory Scene Inference
  Jae Hyun Lim · Pedro O. Pinheiro · Negar Rostamzadeh · Chris Pal · Sungjin Ahn
- 2019 Poster: Sequential Neural Processes
  Gautam Singh · Jaesik Yoon · Youngsung Son · Sungjin Ahn
- 2019 Spotlight: Sequential Neural Processes
  Gautam Singh · Jaesik Yoon · Youngsung Son · Sungjin Ahn
- 2018 Poster: Bayesian Model-Agnostic Meta-Learning
  Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn
- 2018 Spotlight: Bayesian Model-Agnostic Meta-Learning
  Jaesik Yoon · Taesup Kim · Ousmane Dia · Sungwoong Kim · Yoshua Bengio · Sungjin Ahn