Workshop: 4th Workshop on Self-Supervised Learning: Theory and Practice

Bridging State and History Representations: Understanding Self-Predictive RL

Tianwei Ni · Benjamin Eysenbach · Erfan Seyedsalehi · Michel Ma · Clement Gehring · Aditya Mahajan · Pierre-Luc Bacon


Representations are at the core of all deep reinforcement learning (RL) methods for both Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Many representation learning methods and theoretical frameworks have been developed to understand what constitutes an effective representation. However, the relationships among these methods, and the properties they share, remain unclear. In this paper, we show that many of these seemingly distinct methods and frameworks for state and history abstractions are, in fact, based on a common idea of self-predictive abstraction. Furthermore, we provide theoretical insights into the widely adopted stop-gradient technique for learning self-predictive representations.
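To make the stop-gradient technique mentioned above concrete, the sketch below shows it in a toy linear self-predictive setup. The encoder, latent model, and all variable names here are illustrative assumptions for exposition, not the paper's actual method: an encoder maps states to latents, a latent model predicts the next latent, and the stop-gradient blocks the gradient path through the target.

```python
import numpy as np

# Toy linear self-predictive model (an illustrative assumption, not the
# paper's exact setup): encoder phi(s) = W @ s, latent model P(z) = A @ z.
rng = np.random.default_rng(0)
d_s, d_z = 4, 2                    # state and latent dimensions
W = rng.normal(size=(d_z, d_s))    # encoder weights
A = rng.normal(size=(d_z, d_z))    # latent transition weights
s = rng.normal(size=d_s)           # current state
s_next = rng.normal(size=d_s)      # next state

pred = A @ (W @ s)                 # predicted next latent
target = W @ s_next                # encoded next latent (prediction target)
err = pred - target                # loss = 0.5 * ||err||^2

# With a stop-gradient on the target, only the prediction path
# contributes to dL/dW:
grad_stopgrad = A.T @ np.outer(err, s)

# Without the stop-gradient, the target path adds a second term to dL/dW,
# pulling both paths toward agreement (e.g. a collapsed representation):
grad_full = grad_stopgrad - np.outer(err, s_next)
```

In deep RL practice the same effect is typically obtained with `Tensor.detach()` (or a slowly updated target network) on the encoded next observation, so that only the online encoder and latent model receive gradients.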
