Bayesian Inverse Constrained Reinforcement Learning
Dimitris Papadimitriou · Daniel Brown · Usman Anwar

We consider, from a Bayesian perspective, the problem of inferring constraints from demonstrations. We propose Bayesian Inverse Constraint Reinforcement Learning (BICRL), a novel approach that infers a probability distribution over constraints from demonstrated trajectories. Compared to prior constraint-inference algorithms, BICRL offers three main advantages: (1) it can infer constraints from partial trajectories and even from disjoint state-action pairs; (2) it can learn constraints from suboptimal demonstrations and in stochastic environments; and (3) it estimates a posterior distribution over constraints, which enables active learning and robust policy optimization.
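
As an illustration of the general idea, the following is a minimal sketch (not the authors' implementation) of Bayesian constraint inference in a small tabular grid world: constraints are modeled as binary indicators over states, and a Metropolis-Hastings sampler flips one indicator at a time, scoring each proposal with a Boltzmann likelihood of the demonstrated state-action pairs plus a sparsity prior. The grid layout, penalty value, hyperparameters, and demonstrations below are all illustrative assumptions.

import numpy as np

N, GOAL = 5, 4                                  # 5x5 grid; goal at the top-right corner (state 4)
PENALTY, GAMMA, BETA, PRIOR_P = -10.0, 0.95, 5.0, 0.05
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right

def step(s, a):
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    return min(max(r + dr, 0), N - 1) * N + min(max(c + dc, 0), N - 1)

NEXT = np.array([[step(s, a) for a in range(4)] for s in range(N * N)])

def q_values(constraints):
    # Value iteration with an extra penalty on constrained states.
    R = np.full(N * N, -1.0)                    # per-step cost
    R[GOAL] = 0.0
    R = R + PENALTY * constraints
    Q = np.zeros((N * N, 4))
    for _ in range(100):
        V = Q.max(axis=1)
        V[GOAL] = 0.0                           # goal is absorbing
        Q = R[NEXT] + GAMMA * V[NEXT]
    return Q

def log_posterior(demos, constraints):
    # Boltzmann likelihood of demonstrated (state, action) pairs + Bernoulli prior.
    logits = BETA * q_values(constraints)
    m = logits.max(axis=1, keepdims=True)
    logp = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    log_lik = sum(logp[s, a] for s, a in demos)
    log_prior = np.sum(constraints * np.log(PRIOR_P)
                       + (1.0 - constraints) * np.log(1.0 - PRIOR_P))
    return log_lik + log_prior

# Three identical demonstrations that detour down and around instead of
# crossing the top row directly, hinting that the top-row states between
# the start (state 0) and the goal (state 4) may be constrained.
traj = [(0, 1), (5, 3), (6, 3), (7, 3), (8, 3), (9, 0)]
demos = 3 * traj

# Metropolis-Hastings over the binary constraint indicators.
rng = np.random.default_rng(0)
c = np.zeros(N * N)                             # start with no constraints
lp = log_posterior(demos, c)
samples = []
for it in range(1500):
    prop = c.copy()
    i = rng.integers(N * N)
    prop[i] = 1.0 - prop[i]                     # flip one state's indicator
    lp_prop = log_posterior(demos, prop)
    if np.log(rng.random()) < lp_prop - lp:     # symmetric proposal, so plain MH ratio
        c, lp = prop, lp_prop
    if it >= 500:                               # discard burn-in
        samples.append(c.copy())

posterior_mean = np.mean(samples, axis=0)       # per-state probability of being constrained
print(posterior_mean.reshape(N, N).round(2))

Because the likelihood factors over individual state-action pairs, this sketch works equally well with partial trajectories or disjoint pairs, and the sampled indicators yield a per-state posterior probability of being constrained, in line with the advantages listed above.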

Author Information

Dimitris Papadimitriou (UC Berkeley)
Daniel Brown (UC Berkeley)
Usman Anwar (Information Technology University, Lahore)
