Poster
Offline Reinforcement Learning as One Big Sequence Modeling Problem
Michael Janner · Qiyang Li · Sergey Levine

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Reinforcement learning (RL) is typically viewed as the problem of estimating single-step policies (for model-free RL) or single-step models (for model-based RL), leveraging the Markov property to factorize the problem in time. However, we can also view RL as a sequence modeling problem: predict a sequence of actions that leads to a sequence of high rewards. Viewed in this way, it is tempting to consider whether powerful, high-capacity sequence prediction models that work well in other supervised learning domains, such as natural language processing, can also provide simple and effective solutions to the RL problem. To this end, we explore how RL can be reframed as "one big sequence modeling" problem, using state-of-the-art Transformer architectures to model distributions over sequences of states, actions, and rewards. Addressing RL as a sequence modeling problem significantly simplifies a range of design decisions: we no longer require separate behavior policy constraints, as is common in prior work on offline model-free RL, and we no longer require ensembles or other epistemic uncertainty estimators, as is common in prior work on model-based RL. All of these roles are filled by the same Transformer sequence model. In our experiments, we demonstrate the flexibility of this approach across imitation learning, goal-conditioned RL, and offline RL.
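To make the core idea concrete, the sketch below illustrates "one big sequence modeling" under simplifying assumptions: each state, action, and reward dimension is discretized into tokens, the flattened trajectory is modeled autoregressively with a causal Transformer, and training uses a standard next-token cross-entropy loss. All names (TrajectoryGPT, the hyperparameters, the bin count) are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch: a GPT-style causal Transformer over discretized
# trajectory tokens [s_0, a_0, r_0, s_1, a_1, r_1, ...].
# Hypothetical names and hyperparameters; not the paper's code.
import torch
import torch.nn as nn

class TrajectoryGPT(nn.Module):
    def __init__(self, vocab_size=100, d_model=128, n_heads=4,
                 n_layers=3, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids; each scalar dimension of
        # state, action, and reward is discretized into vocab_size bins.
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: every token attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        h = self.encoder(x, mask=mask)
        return self.head(h)  # next-token logits over the shared vocabulary

# Training step: predict token t+1 from tokens <= t, exactly as in
# language modeling (random data here, standing in for an offline dataset).
model = TrajectoryGPT()
tokens = torch.randint(0, 100, (8, 64))
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
```

At decision time, the same trained model can act as policy, dynamics model, and value predictor at once: decoding action tokens (for instance, with beam search) yields sequences the model predicts will lead to high cumulative reward, which is the control strategy the abstract alludes to.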

Author Information

Michael Janner (UC Berkeley)
Qiyang Li (University of California, Berkeley)
Sergey Levine (UC Berkeley)
