The goal of robust reinforcement learning (RL) is to learn a policy that is robust against the uncertainty in model parameters. Parameter uncertainty commonly occurs in many real-world RL applications due to simulator modeling errors, changes in the real-world system dynamics over time, and adversarial disturbances. Robust RL is typically formulated as a max-min problem, where the objective is to learn the policy that maximizes the value against the worst possible models that lie in an uncertainty set. In this work, we propose a robust RL algorithm called Robust Fitted Q-Iteration (RFQI), which uses only an offline dataset to learn the optimal robust policy. Robust RL with offline data is significantly more challenging than its non-robust counterpart because of the minimization over all models present in the robust Bellman operator. This poses challenges in offline data collection, optimization over the models, and unbiased estimation. In this work, we propose a systematic approach to overcome these challenges, resulting in our RFQI algorithm. We prove that RFQI learns a near-optimal robust policy under standard assumptions and demonstrate its superior performance on standard benchmark problems.
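To make the max-min formulation concrete, below is a minimal illustrative sketch (not the paper's RFQI, which works from offline data with function approximation): tabular robust Q-iteration where the inner minimization is taken over a small finite uncertainty set of transition models. All function and variable names here are hypothetical.

```python
import numpy as np

def robust_q_iteration(models, rewards, gamma=0.9, iters=200):
    """Tabular robust Q-iteration over a finite uncertainty set.

    models:  list of transition arrays, each of shape (S, A, S),
             where models[i][s, a, s'] = P_i(s' | s, a).
    rewards: array of shape (S, A).
    Returns the robust Q-function, shape (S, A).
    """
    n_states, n_actions = rewards.shape
    q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        v = q.max(axis=1)  # greedy value: V(s) = max_a Q(s, a)
        # Robust Bellman backup: take the worst (minimum) expected
        # next-state value over every model in the uncertainty set.
        next_vals = np.stack([p @ v for p in models])  # (n_models, S, A)
        q = rewards + gamma * next_vals.min(axis=0)
    return q
```

Because the robust backup minimizes over all models before maximizing over actions, the resulting Q-values are pointwise no larger than those obtained by running standard Q-iteration under any single model from the set.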
Author Information
Kishan Panaganti (TAMU)
I am a fifth-year Electrical and Computer Engineering PhD student at Texas A&M University, where I have the privilege of being advised by Prof. Dileep Kalathil. I work on reinforcement learning algorithms that span several areas, including optimization, high-dimensional probability, multi-armed bandits, and stochastic theory (a non-exhaustive list). In the near future, I plan to contribute theoretical guarantees for algorithms in the safe and robust regimes of reinforcement learning. I am also entering the post-doc job market starting Summer 2023!
Zaiyan Xu (Texas A&M University)
Dileep Kalathil (Texas A&M University)
Mohammad Ghavamzadeh (Google Research)
More from the Same Authors
- 2022 : A Mixture-of-Expert Approach to RL-based Dialogue Management »
  Yinlam Chow · Azamat Tulepbergenov · Ofir Nachum · Dhawal Gupta · Moonkyung Ryu · Mohammad Ghavamzadeh · Craig Boutilier
- 2022 Poster: Private and Communication-Efficient Algorithms for Entropy Estimation »
  Gecia Bravo-Hermsdorff · Róbert Busa-Fekete · Mohammad Ghavamzadeh · Andres Munoz Medina · Umar Syed
- 2022 Poster: DOPE: Doubly Optimistic and Pessimistic Exploration for Safe Reinforcement Learning »
  Archana Bura · Aria HasanzadeZonuzy · Dileep Kalathil · Srinivas Shakkottai · Jean-Francois Chamberland
- 2022 Poster: Enhanced Meta Reinforcement Learning via Demonstrations in Sparse Reward Environments »
  Desik Rengarajan · Sapana Chaudhary · Jaewon Kim · Dileep Kalathil · Srinivas Shakkottai
- 2022 Poster: Anchor-Changing Regularized Natural Policy Gradient for Multi-Objective Reinforcement Learning »
  Ruida Zhou · Tao Liu · Dileep Kalathil · P. R. Kumar · Chao Tian
- 2022 Poster: Operator Splitting Value Iteration »
  Amin Rakhsha · Andrew Wang · Mohammad Ghavamzadeh · Amir-massoud Farahmand
- 2022 Poster: Efficient Risk-Averse Reinforcement Learning »
  Ido Greenberg · Yinlam Chow · Mohammad Ghavamzadeh · Shie Mannor
- 2021 Poster: Learning Policies with Zero or Bounded Constraint Violation for Constrained MDPs »
  Tao Liu · Ruida Zhou · Dileep Kalathil · P. R. Kumar · Chao Tian
- 2021 Poster: Adaptive Sampling for Minimax Fair Classification »
  Shubhanshu Shekhar · Greg Fields · Mohammad Ghavamzadeh · Tara Javidi
- 2020 Poster: Online Planning with Lookahead Policies »
  Yonathan Efroni · Mohammad Ghavamzadeh · Shie Mannor
- 2020 Session: Orals & Spotlights Track 09: Reinforcement Learning »
  Pulkit Agrawal · Mohammad Ghavamzadeh