
Poster in Workshop: Goal-Conditioned Reinforcement Learning

Contrastive Difference Predictive Coding

Chongyi Zheng · Russ Salakhutdinov · Benjamin Eysenbach

Keywords: [ temporal difference ] [ goal-reaching ] [ goal-conditioned RL ] [ Reinforcement Learning ] [ contrastive learning ]


Abstract:

Predicting and reasoning about the future lies at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time-series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series to decrease the amount of data required to learn to predict future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves higher success rates with less data and can better cope with stochastic environments.
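For context, the Monte Carlo contrastive predictive coding baseline the abstract refers to can be summarized by an InfoNCE objective over (state, future state) pairs. The sketch below is illustrative only, not the paper's code or its temporal-difference variant; the encoder architecture, representation size, and batch construction are assumptions made for the example.

```python
# Minimal sketch (assumed details, not the authors' implementation):
# Monte Carlo InfoNCE over (state, future state) pairs, the contrastive
# predictive coding setup the paper's temporal-difference method builds on.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps raw observations to L2-normalized representations (sizes assumed)."""

    def __init__(self, obs_dim: int, repr_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, repr_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def infonce_loss(state_enc: Encoder, goal_enc: Encoder,
                 states: torch.Tensor, future_states: torch.Tensor) -> torch.Tensor:
    """InfoNCE: each state's positive is the future state sampled from the same
    trajectory; the other rows of the batch act as negatives."""
    phi = state_enc(states)          # (B, d) state representations
    psi = goal_enc(future_states)    # (B, d) future-state representations
    logits = phi @ psi.t()           # (B, B) similarity matrix
    labels = torch.arange(len(states), device=states.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random data standing in for sampled trajectory pairs.
    obs_dim = 10
    enc_s, enc_g = Encoder(obs_dim), Encoder(obs_dim)
    s = torch.randn(32, obs_dim)
    s_future = torch.randn(32, obs_dim)
    loss = infonce_loss(enc_s, enc_g, s, s_future)
    loss.backward()
    print(float(loss))
```

In this Monte Carlo form, positives must come from the same trajectory, which is the data-efficiency limitation the abstract's temporal-difference variant addresses by stitching together pieces of different time series.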
