

Poster

Bayesian Risk Markov Decision Processes

Yifan Lin · Yuxuan Ren · Enlu Zhou

Hall J (level 1) #725

Keywords: [ parameter uncertainty ] [ risk averse ] [ approximate dynamic programming ] [ Reinforcement Learning ] [ Markov Decision Processes ]


Abstract:

We consider finite-horizon Markov Decision Processes where parameters, such as transition probabilities, are unknown and estimated from data. The popular distributionally robust approach to addressing parameter uncertainty can sometimes be overly conservative. In this paper, we propose a new formulation, the Bayesian risk Markov decision process (BR-MDP), to address parameter uncertainty in MDPs, where a risk functional is applied in nested form to the expected total cost with respect to the Bayesian posterior distributions of the unknown parameters. The proposed formulation provides more flexible risk attitudes towards parameter uncertainty and takes into account the availability of data in future time stages. To solve the proposed formulation with the conditional value-at-risk (CVaR) risk functional, we propose an efficient approximation algorithm by deriving an analytical approximation of the value function and utilizing the convexity of CVaR. We demonstrate the empirical performance of the BR-MDP formulation and the proposed algorithm on a gambler’s betting problem and an inventory control problem.
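The algorithmic ingredient mentioned above, namely that CVaR admits a convex variational form, can be illustrated with a small Monte Carlo sketch. The snippet below is a minimal, hypothetical illustration and not the authors' approximation algorithm: it estimates the CVaR at level alpha of the expected total cost across posterior samples of the unknown parameter, using the Rockafellar-Uryasev representation CVaR_alpha(X) = min_t { t + E[(X - t)^+] / (1 - alpha) }, whose objective is convex in the auxiliary scalar t. All names and the simulated posterior are assumptions for illustration only.

import numpy as np
from scipy.optimize import minimize_scalar

def cvar_over_posterior(expected_costs: np.ndarray, alpha: float) -> float:
    """CVaR_alpha of per-parameter expected costs, where each entry is an
    expected total cost under one posterior draw of the unknown MDP parameter
    (hypothetical setup, for illustration only)."""
    def ru_objective(t: float) -> float:
        # Rockafellar-Uryasev objective: convex and piecewise linear in t.
        return t + np.mean(np.maximum(expected_costs - t, 0.0)) / (1.0 - alpha)
    # The minimizer lies between the smallest and largest sampled cost,
    # so a bounded one-dimensional search suffices.
    res = minimize_scalar(ru_objective,
                          bounds=(float(expected_costs.min()), float(expected_costs.max())),
                          method="bounded")
    return float(res.fun)

# Usage: pretend each entry is E[total cost | theta_i] for a posterior draw theta_i.
rng = np.random.default_rng(0)
posterior_expected_costs = rng.gamma(shape=2.0, scale=5.0, size=1000)
print(cvar_over_posterior(posterior_expected_costs, alpha=0.9))

With alpha = 0.9, the estimate averages the worst 10% of the sampled expected costs, which is the risk-averse attitude toward parameter uncertainty that the nested formulation applies at each stage.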
