

Poster

Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction

Aviral Kumar · Justin Fu · George Tucker · Sergey Levine

East Exhibition Hall B + C #214

Keywords: [ Reinforcement Learning and Planning ] [ Reinforcement Learning ]


Abstract:

Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify \emph{bootstrapping error} as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random data and suboptimal demonstrations, on a range of continuous control tasks.
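The sketch below is an illustrative Python example, not the authors' implementation: all names (gaussian_mmd, constrained_backup, mmd_threshold) are hypothetical. It shows one way to restrict the Bellman backup to actions supported by the dataset, using a kernel maximum mean discrepancy (MMD) between candidate actions and dataset actions to decide whether bootstrapping from them is safe.

# Illustrative sketch only: constrain the Bellman target to actions
# that stay close (in MMD) to the actions observed in the dataset,
# mitigating value overestimation on out-of-distribution actions.
import numpy as np

def gaussian_mmd(x, y, sigma=1.0):
    """Biased squared MMD between sample sets x and y (shape [n, action_dim])
    under a Gaussian kernel."""
    def k(a, b):
        d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def constrained_backup(q_fn, reward, next_state, dataset_actions,
                       candidate_actions, gamma=0.99, mmd_threshold=0.05):
    """Bellman target that bootstraps only from candidate actions whose
    MMD to the dataset actions is below a threshold."""
    if gaussian_mmd(candidate_actions, dataset_actions) > mmd_threshold:
        # Candidates stray from the data support: fall back to dataset actions.
        candidate_actions = dataset_actions
    q_values = np.array([q_fn(next_state, a) for a in candidate_actions])
    return reward + gamma * q_values.max()

In the paper's algorithm a similar support constraint is imposed on the learned policy during training rather than checked per backup; the sketch only conveys the core idea of constraining action selection so that bootstrapping error does not accumulate.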
