

Poster

Beyond Confidence Regions: Tight Bayesian Ambiguity Sets for Robust MDPs

Marek Petrik · Reazul Hasan Russel

East Exhibition Hall B, C #185

Keywords: [ Model-Based RL ] [ Reinforcement Learning ] [ Reinforcement Learning and Planning ] [ Reinforcement Learning and Planning -> Decision and Control ]


Abstract:

Robust MDPs (RMDPs) can be used to compute policies with provable worst-case guarantees in reinforcement learning. The quality and robustness of an RMDP solution are determined by the ambiguity set---the set of plausible transition probabilities---which is usually constructed as a multi-dimensional confidence region. Existing methods construct ambiguity sets as confidence regions using concentration inequalities, which leads to overly conservative solutions. This paper proposes a new paradigm that can achieve better solutions with the same robustness guarantees without using confidence regions as ambiguity sets. To incorporate prior knowledge, our algorithms optimize the size and position of ambiguity sets using Bayesian inference. Our theoretical analysis shows the safety of the proposed method, and the empirical results demonstrate its practical promise.
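The core idea above---sizing and positioning an ambiguity set from the posterior rather than from a concentration inequality---can be illustrated with a minimal sketch. This is not the paper's exact construction: the Dirichlet posterior, the L1 norm, the posterior-mean center, and the `budget` coverage parameter are all illustrative assumptions. The sketch builds, for a single state-action pair, the smallest L1 ball around the posterior mean that covers a target fraction of posterior samples of the transition probabilities.

```python
import numpy as np

def bayesian_ambiguity_set(counts, budget=0.9, n_samples=2000, seed=0):
    """Sketch: a Bayesian credible-region ambiguity set for one state-action pair.

    `counts` are observed transition counts to each next state. A
    Dirichlet(1 + counts) posterior is an illustrative modeling choice.
    The set is centered at the posterior mean and its radius is the
    smallest L1 distance that covers a `budget` fraction of posterior
    samples, rather than a radius from a concentration inequality.
    """
    rng = np.random.default_rng(seed)
    # Draw posterior samples of the transition-probability vector.
    posterior = rng.dirichlet(1.0 + np.asarray(counts, dtype=float),
                              size=n_samples)
    nominal = posterior.mean(axis=0)                  # center of the set
    dists = np.abs(posterior - nominal).sum(axis=1)   # L1 distance per sample
    radius = np.quantile(dists, budget)               # smallest covering radius
    return nominal, radius

nominal, radius = bayesian_ambiguity_set(counts=[20, 5, 2])
```

Because the radius is fitted to the posterior rather than to a worst-case tail bound, it is typically much smaller than a Hoeffding-style confidence radius at the same guarantee level, which is the source of the less conservative solutions the abstract describes.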
