

Poster in Workshop: Safe and Robust Control of Uncertain Systems

State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards

Miguel Calvo-Fullana · Santiago Paternain · Alejandro Ribeiro


Abstract:

Constrained reinforcement learning involves multiple rewards that must individually accumulate to given thresholds. For this class of problems, we present a simple example in which the desired optimal policy cannot be induced by any linear combination of rewards. Hence, there exist constrained reinforcement learning problems for which neither regularized nor classical primal-dual methods yield optimal policies. This work addresses this shortcoming by augmenting the state with Lagrange multipliers and reinterpreting primal-dual methods as the portion of the dynamics that drives the multipliers' evolution. This approach provides a systematic state augmentation procedure that is guaranteed to solve reinforcement learning problems with constraints. Thus, while primal-dual methods can fail at finding optimal policies, running the dual dynamics while executing the augmented policy yields an algorithm that provably samples actions from the optimal policy.
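
To make the execution phase concrete, below is a minimal sketch of the dual dynamics run while acting with a multiplier-augmented policy, as described in the abstract. It assumes a hypothetical Gym-style environment whose `step` returns the next state and a reward vector (objective first, one entry per constraint) and a pretrained `policy(state, lam)` conditioned on the augmented state; all names and the update rule's sign convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def state_augmented_execution(env, policy, thresholds, eta=0.05,
                              epoch_len=100, num_epochs=50):
    """Sketch: execute a policy conditioned on (state, multipliers)
    while running the dual dynamics on the multipliers.

    Assumptions (illustrative, not from the paper's code):
    - env.reset() returns an initial state;
    - env.step(action) returns (next_state, rewards), where rewards[0]
      is the objective reward and rewards[1:] are constraint rewards;
    - policy(state, lam) samples an action for the augmented state.
    """
    lam = np.zeros(len(thresholds))   # one Lagrange multiplier per constraint
    state = env.reset()
    for _ in range(num_epochs):
        reward_sums = np.zeros(len(thresholds))
        for _ in range(epoch_len):
            action = policy(state, lam)        # policy sees the multipliers
            state, rewards = env.step(action)
            reward_sums += np.asarray(rewards[1:])  # constraint rewards only
        # Dual dynamics: if a constraint's average reward falls short of
        # its threshold, its multiplier grows, shifting the augmented
        # policy toward satisfying that constraint; project onto lam >= 0.
        avg = reward_sums / epoch_len
        lam = np.maximum(lam + eta * (np.asarray(thresholds) - avg), 0.0)
    return lam
```

In this sketch the multipliers are never fixed to optimal dual values; instead, their online evolution is the mechanism that makes the executed policy sample actions from the optimal (possibly non-stationary) policy, which is the paper's central claim.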