

Poster in Workshop: Safe and Robust Control of Uncertain Systems

Bayesian Inverse Constrained Reinforcement Learning

Dimitris Papadimitriou · Daniel Brown · Usman Anwar


Abstract:

We consider, from a Bayesian perspective, the problem of inferring constraints from demonstrations. We propose Bayesian Inverse Constraint Reinforcement Learning (BICRL), a novel approach that infers a probability distribution over constraints from demonstrated trajectories. The main advantages of BICRL, compared to prior constraint inference algorithms, are (1) the freedom to infer constraints from partial trajectories and even from disjoint state-action pairs, (2) the ability to learn constraints from suboptimal demonstrations and in stochastic environments, and (3) the opportunity to estimate a posterior distribution over constraints, which enables active learning and robust policy optimization.
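
The abstract itself contains no code; the following is a minimal, self-contained sketch of the kind of Bayesian constraint inference it describes, under simplifying assumptions that are ours rather than the paper's: a small tabular gridworld with known dynamics and a known nominal reward, Boltzmann-rational demonstrators, binary per-state constraint indicators with a uniform prior and a fixed penalty, and a flip-one-state Metropolis-Hastings proposal. All names and parameter values (`soft_q_values`, `BETA`, `CONSTRAINT_PENALTY`, etc.) are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

# Illustrative gridworld setup (hypothetical, not from the paper): a flattened
# 5x5 grid with known transition dynamics, a known nominal reward vector, and
# demonstrations given as (state, action) pairs.
N_STATES = 25
N_ACTIONS = 4
BETA = 2.0                   # Boltzmann rationality temperature (assumed)
CONSTRAINT_PENALTY = -10.0   # penalty added to constrained states (assumed)


def soft_q_values(reward, transitions, gamma=0.95, iters=200):
    """Soft value iteration; transitions has shape (S, A, S'), reward shape (S,)."""
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(iters):
        v = logsumexp(BETA * q, axis=1) / BETA      # soft max over actions
        q = reward[:, None] + gamma * transitions @ v
    return q


def demo_log_likelihood(constraints, reward, transitions, demos):
    """Log-likelihood of demonstrated (s, a) pairs under a Boltzmann-rational policy."""
    penalized = reward + CONSTRAINT_PENALTY * constraints
    q = soft_q_values(penalized, transitions)
    log_pi = BETA * q - logsumexp(BETA * q, axis=1, keepdims=True)
    return sum(log_pi[s, a] for s, a in demos)


def sample_constraint_posterior(reward, transitions, demos, n_samples=2000, seed=0):
    """Metropolis-Hastings over binary per-state constraint indicators (uniform prior)."""
    rng = np.random.default_rng(seed)
    constraints = np.zeros(N_STATES)
    log_like = demo_log_likelihood(constraints, reward, transitions, demos)
    samples = []
    for _ in range(n_samples):
        proposal = constraints.copy()
        flip = rng.integers(N_STATES)               # propose flipping one state's indicator
        proposal[flip] = 1.0 - proposal[flip]
        prop_log_like = demo_log_likelihood(proposal, reward, transitions, demos)
        # Accept with probability min(1, likelihood ratio); uniform prior terms cancel.
        if np.log(rng.random()) < prop_log_like - log_like:
            constraints, log_like = proposal, prop_log_like
        samples.append(constraints.copy())
    return np.mean(samples, axis=0)                 # per-state posterior constraint probability
```

The returned vector approximates the per-state posterior probability of being constrained; a posterior of this kind is what point (3) of the abstract leverages for active learning and robust policy optimization. Note that because the likelihood only requires evaluating demonstrated state-action pairs under the induced soft policy, the same sketch accepts partial trajectories or disjoint pairs, in the spirit of point (1).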