A Budgeted Markov Decision Process (BMDP) is an extension of a Markov Decision Process to critical applications requiring safety constraints. It relies on a notion of risk implemented in the form of an upper bound on a constraint-violation signal that -- importantly -- can be modified in real time. So far, BMDPs could only be solved in the case of finite state spaces with known dynamics. This work extends the state of the art to environments with continuous state spaces and unknown dynamics. We show that the solution to a BMDP is the fixed point of a novel Budgeted Bellman Optimality operator. This observation allows us to introduce natural extensions of Deep Reinforcement Learning algorithms to address large-scale BMDPs. We validate our approach on two simulated applications: spoken dialogue and autonomous driving.
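To make the setting concrete, the following is a minimal sketch of a budgeted control problem in the simple regime the abstract says was already solvable: a finite-state MDP with known dynamics, solved by value iteration over an augmented state (state, remaining budget). The toy dynamics, the budget grid, and all names are hypothetical illustrations; this is not the paper's Budgeted Bellman Optimality operator, which handles continuous spaces, unknown dynamics, and budgets adjustable at run time.

```python
import numpy as np

# Hypothetical toy BMDP: 2 states, 2 actions, deterministic transitions.
# reward[s, a] is gained, cost[s, a] consumes the safety budget.
N_STATES, N_ACTIONS, HORIZON = 2, 2, 3
BUDGETS = np.linspace(0.0, 1.0, 5)  # discretized remaining budget levels

reward = np.array([[1.0, 2.0], [0.5, 3.0]])
cost   = np.array([[0.0, 0.6], [0.0, 0.9]])
nxt    = np.array([[0, 1], [1, 0]])  # nxt[s, a]: deterministic next state

def solve(horizon=HORIZON):
    # V[t, s, bi]: best achievable return from state s with t steps left,
    # given remaining budget BUDGETS[bi]; cumulative cost may never
    # exceed the budget (a hard-constraint simplification).
    V = np.zeros((horizon + 1, N_STATES, len(BUDGETS)))
    for t in range(1, horizon + 1):
        for s in range(N_STATES):
            for bi, b in enumerate(BUDGETS):
                best = -np.inf
                for a in range(N_ACTIONS):
                    if cost[s, a] > b:  # action would violate the budget
                        continue
                    # remaining budget after paying the cost, snapped
                    # down to the nearest grid point
                    nb = np.searchsorted(BUDGETS, b - cost[s, a],
                                         side="right") - 1
                    best = max(best, reward[s, a] + V[t - 1, nxt[s, a], nb])
                V[t, s, bi] = best if best > -np.inf else 0.0
    return V

V = solve()
```

With a zero budget only cost-free actions are admissible, so the agent collects the safe reward 1.0 three times (total 3.0); with the full budget it can afford one costly action and reaches 4.0. The augmented-state trick is what stops scaling to continuous spaces, which is the gap the paper addresses.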
Author Information
Nicolas Carrara (ULille)
Edouard Leurent (INRIA)
PhD student in Reinforcement Learning, with the INRIA SequeL project (sequential learning), the INRIA Non-A project (finite-time control), and Renault Group.
Romain Laroche (Microsoft Research)
Tanguy Urvoy (Orange-Labs)
Odalric-Ambrym Maillard (INRIA)
Olivier Pietquin (Google Research Brain Team)
More from the Same Authors
- 2020 Poster: Munchausen Reinforcement Learning
  Nino Vieillard · Olivier Pietquin · Matthieu Geist
- 2020 Poster: Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs
  Edouard Leurent · Odalric-Ambrym Maillard · Denis Efimov
- 2020 Poster: Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning
  Nino Vieillard · Tadashi Kozuno · Bruno Scherrer · Olivier Pietquin · Remi Munos · Matthieu Geist
- 2020 Oral: Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning
  Nino Vieillard · Tadashi Kozuno · Bruno Scherrer · Olivier Pietquin · Remi Munos · Matthieu Geist
- 2020 Oral: Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs
  Edouard Leurent · Odalric-Ambrym Maillard · Denis Efimov
- 2020 Poster: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Spotlight: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Poster: Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
  Anders Jonsson · Emilie Kaufmann · Pierre Menard · Omar Darwiche Domingues · Edouard Leurent · Michal Valko
- 2020 Poster: Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications
  Sarah Perrin · Julien Perolat · Mathieu Lauriere · Matthieu Geist · Romuald Elie · Olivier Pietquin
- 2019 Poster: Learning Multiple Markov Chains via Adaptive Allocation
  Mohammad Sadegh Talebi · Odalric-Ambrym Maillard
- 2019 Poster: Regret Bounds for Learning State Representations in Reinforcement Learning
  Ronald Ortner · Matteo Pirotta · Alessandro Lazaric · Ronan Fruit · Odalric-Ambrym Maillard
- 2018 Workshop: Visually grounded interaction and language
  Florian Strub · Harm de Vries · Erik Wijmans · Samyak Datta · Ethan Perez · Mateusz Malinowski · Stefan Lee · Peter Anderson · Aaron Courville · Jeremie Mary · Dhruv Batra · Devi Parikh · Olivier Pietquin · Chiori Hori · Tim Marks · Anoop Cherian
- 2017 Workshop: Visually grounded interaction and language
  Florian Strub · Harm de Vries · Abhishek Das · Satwik Kottur · Stefan Lee · Mateusz Malinowski · Olivier Pietquin · Devi Parikh · Dhruv Batra · Aaron Courville · Jeremie Mary
- 2017 Poster: Is the Bellman residual a bad proxy?
  Matthieu Geist · Bilal Piot · Olivier Pietquin
- 2017 Poster: Modulating early visual processing by language
  Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville
- 2017 Spotlight: Modulating early visual processing by language
  Harm de Vries · Florian Strub · Jeremie Mary · Hugo Larochelle · Olivier Pietquin · Aaron Courville
- 2014 Workshop: From Bad Models to Good Policies (Sequential Decision Making under Uncertainty)
  Odalric-Ambrym Maillard · Timothy A Mann · Shie Mannor · Jeremie Mary · Laurent Orseau · Thomas Dietterich · Ronald Ortner · Peter Grünwald · Joelle Pineau · Raphael Fonteneau · Georgios Theocharous · Esteban D Arcaute · Christos Dimitrakakis · Nan Jiang · Doina Precup · Pierre-Luc Bacon · Marek Petrik · Aviv Tamar
- 2014 Poster: "How hard is my MDP?" The distribution-norm to the rescue
  Odalric-Ambrym Maillard · Timothy A Mann · Shie Mannor
- 2014 Poster: Difference of Convex Functions Programming for Reinforcement Learning
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2014 Spotlight: Difference of Convex Functions Programming for Reinforcement Learning
  Bilal Piot · Matthieu Geist · Olivier Pietquin
- 2014 Oral: "How hard is my MDP?" The distribution-norm to the rescue
  Odalric-Ambrym Maillard · Timothy A Mann · Shie Mannor
- 2012 Poster: Inverse Reinforcement Learning through Structured Classification
  Edouard Klein · Matthieu Geist · Bilal Piot · Olivier Pietquin
- 2012 Poster: Online allocation and homogeneous partitioning for piecewise constant mean-approximation
  Alexandra Carpentier · Odalric-Ambrym Maillard
- 2012 Poster: Hierarchical Optimistic Region Selection driven by Curiosity
  Odalric-Ambrym Maillard
- 2011 Poster: Selecting the State-Representation in Reinforcement Learning
  Odalric-Ambrym Maillard · Remi Munos · Daniil Ryabko
- 2011 Poster: Sparse Recovery with Brownian Sensing
  Alexandra Carpentier · Odalric-Ambrym Maillard · Remi Munos
- 2010 Spotlight: LSTD with Random Projections
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: LSTD with Random Projections
  Mohammad Ghavamzadeh · Alessandro Lazaric · Odalric-Ambrym Maillard · Remi Munos
- 2010 Poster: Scrambled Objects for Least-Squares Regression
  Odalric-Ambrym Maillard · Remi Munos
- 2009 Poster: Compressed Least-Squares Regression
  Odalric-Ambrym Maillard · Remi Munos