
Improving Zero-Shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions
Bogdan Mazoure · Ilya Kostrikov · Ofir Nachum · Jonathan Tompson

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #105

Reinforcement learning (RL) agents are widely used for solving complex sequential decision-making tasks, but still exhibit difficulty generalizing to scenarios not seen during training. While prior online approaches demonstrated that using additional signals beyond the reward function, e.g. via self-supervised learning (SSL), can lead to better generalization capabilities in RL agents, these approaches struggle in the offline RL setting, i.e. learning from a static dataset. We show that the performance of online algorithms for generalization in RL can be hindered in the offline setting due to poor estimation of similarity between observations. We propose a new theoretically motivated framework called Generalized Similarity Functions (GSF), which uses contrastive learning to train an offline RL agent to aggregate observations based on the similarity of their expected future behavior, where we quantify this similarity using generalized value functions. We show that GSF is general enough to recover existing SSL objectives while improving zero-shot generalization performance on two complex pixel-based offline RL benchmarks.
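The core idea, aggregating observations whose expected future behavior is similar via a contrastive objective, can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the authors' implementation: it assumes each observation comes with an embedding and a generalized value function (GVF) vector, picks the positive pair by GVF closeness, and scores it with a standard InfoNCE loss.

```python
import numpy as np

def infonce_loss(embeddings, gvf_values, anchor, temperature=0.1):
    """Contrastive (InfoNCE) loss where the positive for `anchor` is the
    observation whose GVF vector is closest, i.e. whose expected future
    behavior is most similar. Hypothetical sketch for illustration only."""
    n = embeddings.shape[0]
    # L2-normalize embeddings so dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Choose the positive by GVF closeness, excluding the anchor itself.
    dists = np.linalg.norm(gvf_values - gvf_values[anchor], axis=1)
    dists[anchor] = np.inf
    positive = int(np.argmin(dists))
    # Standard InfoNCE: -log p(positive | anchor) over non-anchor samples.
    logits = z @ z[anchor] / temperature
    mask = np.arange(n) != anchor
    return -(logits[positive] - np.log(np.exp(logits[mask]).sum()))
```

In this sketch the GVF vectors stand in for the paper's notion of similarity of expected future behavior; minimizing the loss pulls together embeddings of observations with similar GVF vectors and pushes apart the rest.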

Author Information

Bogdan Mazoure (McGill University, Google Brain)

Ph.D. student at MILA / McGill University, supervised by Doina Precup and Devon Hjelm. Interested in reinforcement learning, representation learning, mathematical statistics and density estimation.

Ilya Kostrikov (University of California Berkeley)
Ofir Nachum (Google Brain)
Jonathan Tompson (Google Brain)
