

Workshop

Model Uncertainty and Risk in Reinforcement Learning

Yaakov Engel · Mohammad Ghavamzadeh · Shie Mannor · Pascal Poupart

Westin: Callaghan

Sat 13 Dec, 7:30 a.m. PST

Reinforcement Learning (RL) problems are typically formulated in terms of Stochastic Decision Processes (SDPs), or a specialization thereof, Markov Decision Processes (MDPs), with the goal of identifying an optimal control policy. In contrast to planning problems, RL problems are characterized by the lack of complete information concerning the transition and reward models of the SDP. Hence, algorithms for solving RL problems need to estimate properties of the system from finite data, and any such estimated quantity has inherent uncertainty. One of the interesting and challenging aspects of RL is that the algorithms have partial control over the data sample they observe, allowing them to actively control the amount of this uncertainty and potentially trade it off against performance. Reinforcement Learning, as a field of research, has over the past few years seen renewed interest in methods that explicitly consider the uncertainties inherent to the learning process. Indeed, interest in data-driven models that take uncertainties into account extends beyond RL to the fields of Control Theory, Operations Research, and Statistics. Within the RL community, relevant lines of research include Bayesian RL, risk-sensitive and robust dynamic decision making, RL with confidence intervals, and applications of risk-aware and uncertainty-aware decision-making. The goal of the workshop is to bring together researchers in RL and related fields who work on issues of risk and model uncertainty, to stimulate interaction, and to discuss directions for future work.
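As a minimal illustration of the kind of model uncertainty described above (not part of the workshop materials), the sketch below estimates the transition model of a toy MDP from finite data using a Dirichlet posterior, one common Bayesian RL device. The MDP, its size, and names such as true_P are hypothetical; the point is only that the posterior spread over the transition probabilities shrinks as more transitions are observed, which is the quantity that uncertainty-aware methods reason about and trade off against performance.

```python
# Sketch: Bayesian uncertainty over an MDP transition model learned from finite data.
# Toy 3-state, 2-action MDP with an independent Dirichlet prior on each (s, a) row.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

# Hypothetical "true" transition probabilities, unknown to the learner.
true_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Dirichlet(1, ..., 1) prior pseudo-counts for every (state, action) pair.
counts = np.ones((n_states, n_actions, n_states))

def observe(s, a):
    """Sample a next state from the true MDP and update the posterior counts."""
    s_next = rng.choice(n_states, p=true_P[s, a])
    counts[s, a, s_next] += 1
    return s_next

def posterior_mean_and_std(s, a):
    """Posterior mean and marginal std. dev. of each transition probability P(s' | s, a)."""
    alpha = counts[s, a]
    total = alpha.sum()
    mean = alpha / total
    var = alpha * (total - alpha) / (total ** 2 * (total + 1))
    return mean, np.sqrt(var)

# Uncertainty about P(. | s=0, a=0) shrinks as more transitions are observed.
for extra in (0, 10, 100, 1000):
    for _ in range(extra):
        observe(0, 0)
    mean, std = posterior_mean_and_std(0, 0)
    n_obs = int(counts[0, 0].sum() - n_states)  # subtract prior pseudo-counts
    print(f"after {n_obs:4d} samples: mean={np.round(mean, 3)} std={np.round(std, 3)}")
```

Because the learner chooses which (state, action) pairs to visit, it also chooses where this posterior spread shrinks, which is the exploration/performance trade-off the abstract refers to.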
