

Poster

Efficient Risk-Averse Reinforcement Learning

Ido Greenberg · Yinlam Chow · Mohammad Ghavamzadeh · Shie Mannor

Hall J (level 1) #411

Keywords: [ blindness to success ] [ CVaR ] [ coherent risk measures ] [ cross entropy method ] [ CEM ] [ Reinforcement Learning ] [ Safe RL ] [ risk sensitive RL ] [ risk averse RL ] [ sample efficient RL ]


Abstract:

In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns. A risk measure often focuses on the worst returns in the agent's experience. As a result, standard methods for risk-averse RL often ignore high-return strategies. We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a mechanism we call soft risk to bypass it. We also devise a novel cross-entropy module for sampling, which (1) preserves risk aversion despite the soft risk, and (2) independently improves sample efficiency. By separating the risk aversion of the sampler from that of the optimizer, we can sample episodes with poor conditions, yet optimize with respect to successful strategies. We combine these two concepts in CeSoR (the Cross-entropy Soft-Risk optimization algorithm), which can be applied on top of any risk-averse policy gradient (PG) method. We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks, including in scenarios where standard risk-averse PG completely fails.
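The abstract describes the two components only at a high level. As a rough illustration of how they interact, below is a minimal, hedged Python sketch of a CVaR policy-gradient loop with a soft (annealed) risk level for the optimizer and a CEM-tilted sampler over environment conditions. The toy environment, the Gaussian policy, the annealing schedule, and all hyperparameters are illustrative assumptions rather than the authors' implementation, and the sketch omits details such as the importance correction needed when the sampler's condition distribution is tilted.

```python
# Hedged sketch of the CeSoR idea (not the authors' code): a CVaR policy-gradient
# loop where (1) the optimizer's risk level is "softened" -- it starts near
# risk-neutral and anneals toward the target alpha -- and (2) a cross-entropy
# method (CEM) tilts the sampler toward harder environment "conditions".
# The toy environment, Gaussian policy, and all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(action, condition):
    # Toy return: larger actions pay off on average but are penalized under
    # bad (large) conditions; purely illustrative.
    return action - condition * action**2 + 0.1 * rng.standard_normal()

def sample_episode(theta, cond_mean, cond_std):
    condition = rng.normal(cond_mean, cond_std)  # sampler's environment condition
    action = rng.normal(theta, 0.2)              # Gaussian policy with mean theta
    ret = rollout_return(action, condition)
    logp_grad = (action - theta) / 0.2**2        # d/dtheta of log N(action; theta, 0.2^2)
    return ret, logp_grad, condition

alpha_target = 0.05              # target CVaR risk level
theta = 0.0                      # policy parameter
cond_mean, cond_std = 0.5, 0.3   # CEM sampling distribution over conditions
lr, n_episodes, n_iters = 0.05, 200, 100

for it in range(n_iters):
    # Soft risk: anneal the optimizer's risk level from 1 (risk-neutral)
    # down to alpha_target, so early training still "sees" high-return strategies.
    alpha = max(alpha_target, 1.0 - it / (0.5 * n_iters))

    episodes = [sample_episode(theta, cond_mean, cond_std) for _ in range(n_episodes)]
    rets = np.array([e[0] for e in episodes])
    grads = np.array([e[1] for e in episodes])
    conds = np.array([e[2] for e in episodes])

    # CVaR-PG step: ascend the gradient using only the worst alpha-fraction of
    # returns, with the alpha-quantile (VaR) as baseline, as in CVaR policy
    # gradient methods.
    var_alpha = np.quantile(rets, alpha)
    tail = rets <= var_alpha
    theta += lr * np.mean(grads[tail] * (rets[tail] - var_alpha))

    # CEM step for the sampler: refit the condition distribution toward the
    # conditions that produced the worst returns, so sampling stays risk-averse
    # even while the optimizer's risk level is soft.
    elite = conds[rets <= np.quantile(rets, 0.2)]
    cond_mean = 0.7 * cond_mean + 0.3 * elite.mean()
    cond_std = 0.7 * cond_std + 0.3 * elite.std() + 1e-3

print(f"learned policy mean action: {theta:.3f}")
```

The key design point the sketch tries to convey is the separation of roles: the CEM update keeps feeding the agent hard conditions (sampler stays risk-averse), while the annealed quantile lets the optimizer initially learn from successful episodes before narrowing to the target tail.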
