Oral
Non-delusional Q-learning and value-iteration
Tyler Lu · Dale Schuurmans · Craig Boutilier

Thu Dec 06 01:25 PM -- 01:40 PM (PST) @ Room 220 CD

We identify a fundamental source of error in Q-learning and other forms of dynamic programming with function approximation. Delusional bias arises when the approximation architecture limits the class of expressible greedy policies. Since standard Q-updates make globally uncoordinated action choices with respect to the expressible policy class, inconsistent or even conflicting Q-value estimates can result, leading to pathological behaviour such as over- and under-estimation, instability, and even divergence. To solve this problem, we introduce a new notion of policy consistency and define a local backup process that ensures global consistency through the use of information sets---sets that record constraints on policies consistent with backed-up Q-values. We prove that both the model-based and model-free algorithms using this backup remove delusional bias, yielding the first known algorithms that guarantee optimal results under general conditions. Moreover, these algorithms require only polynomially many information sets (from a potentially exponential support). Finally, we suggest other practical heuristics for value-iteration and Q-learning that attempt to reduce delusional bias.
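The abstract's core phenomenon can be made concrete with a toy example. The sketch below is illustrative only and is not the paper's PCVI/PCQL algorithms: it builds a small deterministic MDP (state names, rewards, and the one-dimensional threshold policy class are all invented for illustration) in which the standard per-state max backup promises a return that no policy expressible by the restricted greedy class can actually achieve, while a backup restricted to realizable joint action choices does not.

```python
# Toy illustration of delusional bias (a hedged sketch, not the paper's
# algorithms): a two-state deterministic MDP whose greedy policy class is
# restricted to 1-D threshold policies, so the standard Q-backup promises
# a return that no expressible policy can achieve.

# Deterministic MDP: s0 --a--> s1 (reward 0), s0 --b--> terminal (reward 0.3),
# s1 --a--> terminal (reward 0), s1 --b--> terminal (reward 1); gamma = 1.
R = {("s0", "a"): 0.0, ("s0", "b"): 0.3, ("s1", "a"): 0.0, ("s1", "b"): 1.0}
PHI = {"s0": 0.0, "s1": 1.0}  # one scalar feature per state

def threshold_policy(theta):
    """Greedy class: choose action 'a' iff phi(s) >= theta."""
    return {s: ("a" if PHI[s] >= theta else "b") for s in PHI}

def policy_return(pi):
    """Return from s0 under joint action assignment pi."""
    if pi["s0"] == "b":
        return R[("s0", "b")]
    return R[("s0", "a")] + R[("s1", pi["s1"])]

# Naive backup: independent per-state max, ignoring whether the joint
# greedy choices are realizable by any single policy in the class.
v_s1 = max(R[("s1", "a")], R[("s1", "b")])               # 1.0
naive_v_s0 = max(R[("s0", "a")] + v_s1, R[("s0", "b")])  # 1.0 -- delusional:
# it assumes (a at s0, b at s1), which no threshold policy expresses.

# Policy-consistent backup: only action assignments realizable by some
# threshold policy are allowed (these three thetas cover every case here).
realizable = {tuple(sorted(threshold_policy(t).items()))
              for t in (-1.0, 0.5, 2.0)}
consistent_v_s0 = max(policy_return(dict(p)) for p in realizable)  # 0.3

print(naive_v_s0, consistent_v_s0)
```

The gap between the two values (1.0 versus 0.3) is the over-estimation the abstract describes: the unconstrained backup combines action choices from incompatible greedy policies. The paper's information-set construction generalizes the "track only realizable joint choices" idea beyond this enumerable toy class.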

Author Information

Tyler Lu (Google)
Dale Schuurmans (Google Inc.)
Craig Boutilier (Google)
