

Poster in Workshop: Goal-Conditioned Reinforcement Learning

Numerical Goal-based Transformers for Practical Conditions

Seonghyun Kim · Samyeul Noh · Ingook Jang

Keywords: [ goal-conditioned reinforcement learning ] [ conservative reward estimation ] [ numerical goal-conditioned transformer ]


Abstract:

Goal-conditioned reinforcement learning (GCRL) aims to deploy trained agents in realistic environments. In particular, offline reinforcement learning has been studied as a way to reduce the cost of online interaction in GCRL. One such method is the Decision Transformer (DT), which conditions on a numerical goal called the "return-to-go" to achieve strong performance. Because DT assumes an idealized setting, such as perfect knowledge of rewards, an improved approach is needed for real-world applications. In this work, we present several attempts and results toward numerical goal-based transformers that operate under practical conditions.
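To make the "return-to-go" conditioning concrete, the following is a minimal sketch of how it is typically computed from an offline trajectory and decremented at inference time. The reward values, target return, and function names are illustrative assumptions, not taken from the paper; the sketch only shows why perfect reward knowledge matters for this style of conditioning.

```python
# A minimal sketch of return-to-go conditioning in the style of Decision Transformer.
# The rewards, trajectory layout, and target value here are hypothetical.
import numpy as np

def returns_to_go(rewards: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Suffix sums of (optionally discounted) rewards: R_t = sum_{k>=t} gamma^(k-t) r_k."""
    rtg = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

# Training-time conditioning: each timestep's token includes the return-to-go
# computed from the logged rewards of the offline trajectory.
rewards = np.array([0.0, 1.0, 0.0, 2.0, 1.0])   # hypothetical logged rewards
print(returns_to_go(rewards))                    # [4. 4. 3. 3. 1.]

# Inference-time conditioning: a target return is chosen up front and decremented
# by the observed reward after each step, which is where imperfect reward
# knowledge becomes a practical obstacle.
target_return = 4.0                              # hypothetical target
for r in rewards:
    target_return -= r                           # requires access to the true reward
```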
