

Poster

Safe Exploration in Finite Markov Decision Processes with Gaussian Processes

Matteo Turchetta · Felix Berkenkamp · Andreas Krause

Area 5+6+7+8 #86

Keywords: [ Reinforcement Learning Algorithms ] [ Gaussian Processes ] [ (Other) Robotics and Control ] [ Active Learning ]


Abstract:

In classical reinforcement learning, agents accept arbitrary short-term loss for long-term gain when exploring their environment. This is infeasible for safety-critical applications, such as robotics, where even a single unsafe action may cause system failure or harm the environment. In this paper, we address the problem of safely exploring finite Markov decision processes (MDPs). We define safety in terms of an a priori unknown safety constraint that depends on states and actions and satisfies certain regularity conditions expressed via a Gaussian process prior. We develop a novel algorithm, SafeMDP, for this task and prove that it completely explores the safely reachable part of the MDP without violating the safety constraint. To achieve this, it cautiously explores safe states and actions in order to gain statistical confidence about the safety of unvisited state-action pairs from noisy observations collected while navigating the environment. Moreover, the algorithm explicitly considers reachability when exploring the MDP, ensuring that it does not get stuck in any state with no safe way out. We demonstrate our method on digital terrain models for the task of exploring an unknown map with a rover.
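To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how a Gaussian process posterior over an unknown safety function can certify unvisited states as safe via a lower confidence bound. The kernel, safety threshold h, confidence scale beta, and the use of scikit-learn are illustrative assumptions, and the sketch omits the reachability and returnability checks that SafeMDP additionally enforces.

    # Sketch only: GP-based safety certification with a lower confidence bound.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # 1-D grid of candidate states (e.g., positions along a terrain transect).
    states = np.linspace(0.0, 10.0, 101).reshape(-1, 1)

    # Noisy safety observations collected at states already visited.
    visited = np.array([[0.0], [1.0], [2.0], [3.0]])
    observed_safety = np.array([1.0, 0.9, 0.8, 0.85])

    # GP prior encoding the regularity assumption on the safety function
    # (assumed RBF kernel with observation noise; not values from the paper).
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2),
        normalize_y=True,
    )
    gp.fit(visited, observed_safety)

    # Posterior mean and standard deviation at every candidate state.
    mean, std = gp.predict(states, return_std=True)

    # Certify a state as safe only if its lower confidence bound clears the
    # threshold h; beta controls how cautious the agent is (both hypothetical).
    h, beta = 0.5, 2.0
    certified_safe = (mean - beta * std) >= h

    print("States certified safe:", states[certified_safe].ravel())

In the full algorithm, exploration then proceeds only through certified-safe state-action pairs from which a safe return path is known to exist, and new observations shrink the posterior uncertainty so that the certified set grows over time.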
