NeurIPS 2023


Goal-Conditioned Reinforcement Learning

Benjamin Eysenbach · Ishan Durugkar · Jason Ma · Andi Peng · Tongzhou Wang · Amy Zhang

Room 206 - 207

Fri 15 Dec, 7 a.m. PST

Learning goal-directed behavior is one of the classical problems in AI, one that has received renewed interest in recent years and currently sits at the crossroads of many seemingly disparate research threads: self-supervised learning, representation learning, probabilistic inference, metric learning, and duality.

Our workshop focuses on these goal-conditioned RL (GCRL) algorithms and their connections to different areas of machine learning. Goal-conditioned RL is exciting not just because of these theoretical connections with different fields, but also because it promises to alleviate some of the practical challenges of applying RL algorithms: users can specify desired outcomes with a single observation, rather than a mathematical reward function. As such, GCRL algorithms may be applied to problems ranging from robotics to language model tuning to molecular design to instruction following.
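The idea of specifying an outcome with a single observation can be illustrated with a goal-conditioned reward: instead of hand-designing a reward function, the reward is computed from the distance between the current observation and a goal observation. The function names and the tolerance below are illustrative assumptions, a minimal sketch rather than any particular GCRL algorithm from the workshop.

```python
import math

def goal_distance(obs, goal):
    # Euclidean distance between the current observation and the goal
    # observation (both given as flat lists of floats).
    return math.sqrt(sum((o - g) ** 2 for o, g in zip(obs, goal)))

def goal_conditioned_reward(obs, goal, tol=0.05):
    # Sparse goal-conditioned reward: 1.0 when the observation lies
    # within `tol` of the goal, else 0.0. The tolerance is an assumed,
    # task-dependent hyperparameter.
    return 1.0 if goal_distance(obs, goal) <= tol else 0.0
```

In practice, many GCRL methods relabel past trajectories with achieved goals (as in hindsight experience replay) so that this sparse signal still provides a useful learning signal.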

Our workshop aims to bring together researchers studying the theory, methods, and applications of GCRL, researchers who may be well positioned to answer questions such as:

1. How does goal-directed behavior in animals inform better GCRL algorithmic design?
2. How can GCRL enable more precise and customizable molecular generation?
3. Do GCRL algorithms provide an effective mechanism for causal reasoning?
4. When and how should GCRL algorithms be applied to precision medicine?
