
Poster

Non-delusional Q-learning and value-iteration

Tyler Lu · Dale Schuurmans · Craig Boutilier

Room 517 AB #162

Keywords: [ Reinforcement Learning and Planning ] [ Decision and Control ] [ Reinforcement Learning ] [ Markov Decision Processes ] [ Planning ]


Abstract:

We identify a fundamental source of error in Q-learning and other forms of dynamic programming with function approximation. Delusional bias arises when the approximation architecture limits the class of expressible greedy policies. Since standard Q-updates make globally uncoordinated action choices with respect to the expressible policy class, inconsistent or even conflicting Q-value estimates can result, leading to pathological behaviour such as over- and under-estimation, instability, and even divergence. To solve this problem, we introduce a new notion of policy consistency and define a local backup process that ensures global consistency through the use of information sets: sets that record constraints on policies consistent with backed-up Q-values. We prove that both the model-based and model-free algorithms using this backup remove delusional bias, yielding the first known algorithms that guarantee optimal results under general conditions. These algorithms furthermore require only polynomially many information sets (from a potentially exponential support). Finally, we suggest other practical heuristics for value-iteration and Q-learning that attempt to reduce delusional bias.
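
As a rough illustration of what delusion-free backups aim to compute, the toy Python sketch below (a simplification under assumed details, not the paper's information-set algorithms) enumerates the greedy policies expressible by a small linear approximator on a hypothetical MDP, evaluates each exactly, and keeps the best. The paper's information-set machinery obtains this optimal-in-class result without exhaustive enumeration; the MDP, feature map, and parameter grid here are purely illustrative assumptions.

```python
"""
Toy sketch (NOT the paper's PCVI/PCQL algorithms): with a restricted
approximator, the set of expressible greedy policies is limited. A
delusion-free method must only back up values achievable by some single
policy in that class. The brute-force version below enumerates parameter
vectors, evaluates each induced greedy policy exactly, and keeps the best.
All MDP details and features here are illustrative assumptions.
"""
import itertools
import numpy as np

GAMMA = 0.9
STATES = [0, 1, 2, 3]          # state 3 is absorbing with zero reward
ACTIONS = [0, 1]

def step(s, a):
    """Deterministic toy transitions and rewards (hypothetical MDP)."""
    if s == 3:
        return 3, 0.0
    if a == 0:                  # "advance" toward the terminal state
        return s + 1, 1.0 if s == 2 else 0.0
    return max(s - 1, 0), 0.1   # "retreat" yields a small immediate reward

def features(s, a):
    """A deliberately weak 2-feature linear architecture."""
    return np.array([float(s), float(a)])

def greedy_policy(theta):
    """Greedy action per state under the linear Q(s, a) = theta . phi(s, a)."""
    return {s: max(ACTIONS, key=lambda a: theta @ features(s, a)) for s in STATES}

def evaluate(policy, iters=500):
    """Approximate policy evaluation by iterating the Bellman operator."""
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            s2, r = step(s, policy[s])
            v[s] = r + GAMMA * v[s2]
    return v

# Enumerate a small grid of parameter vectors. For this toy feature map the
# greedy choice depends only on the sign of theta[1], so the grid covers
# every greedy policy the architecture can express on this MDP.
grid = [np.array(t) for t in itertools.product(np.linspace(-1, 1, 9), repeat=2)]
best_policy, best_value = None, -np.inf
for theta in grid:
    pi = greedy_policy(theta)
    v0 = evaluate(pi)[0]
    if v0 > best_value:
        best_policy, best_value = pi, v0

print("best expressible greedy policy:", best_policy)
print("its value from state 0:", round(best_value, 3))
```

Standard approximate Q-learning on the same MDP can mix action choices across states that no single parameter vector realizes, which is the delusional bias the paper's policy-consistent backups rule out.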
