Poster
A Maximum-Entropy Approach to Off-Policy Evaluation in Average-Reward MDPs
Nevena Lazic · Dong Yin · Mehrdad Farajtabar · Nir Levine · Dilan Gorur · Chris Harris · Dale Schuurmans

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #540

This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs). For MDPs that are ergodic and linear (i.e., where rewards and dynamics are linear in some known features), we provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases. In the more general setting where feature dynamics are approximately linear and rewards are arbitrary, we propose a new approach to estimating stationary distributions with function approximation. We formulate this problem as finding the maximum-entropy distribution subject to matching feature expectations under the empirical dynamics. We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning. We demonstrate the effectiveness of the proposed OPE approaches in multiple environments.
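The maximum-entropy formulation described in the abstract can be sketched as follows; the notation is illustrative only, assuming a feature map \(\varphi(s,a)\) and an empirical transition operator \(\widehat{P}^{\pi}\) for the target policy, which may differ in detail from the paper's construction:

\[
\max_{d}\; H(d)
\qquad \text{s.t.} \qquad
\mathbb{E}_{(s,a)\sim d}\big[(\widehat{P}^{\pi}\varphi)(s,a)\big]
\;=\;
\mathbb{E}_{(s,a)\sim d}\big[\varphi(s,a)\big].
\]

By standard Lagrangian duality for maximum-entropy problems, any solution is an exponential-family distribution whose sufficient statistics are the features,

\[
d_{\theta}(s,a) \;\propto\; \exp\!\big(\theta^{\top}\varphi(s,a)\big),
\]

with the dual parameters \(\theta\) chosen so that the feature-expectation constraint holds.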

Author Information

Nevena Lazic (DeepMind)
Dong Yin (DeepMind)
Mehrdad Farajtabar (DeepMind)
Nir Levine (DeepMind)
Dilan Gorur (DeepMind)
Chris Harris (Google)
Dale Schuurmans (Google Brain & University of Alberta)
