
Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning
Paul Rolland · Luca Viano · Norman Schürhoff · Boris Nikolov · Volkan Cevher

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #326

While Reinforcement Learning (RL) aims to train an agent from a reward function in a given environment, Inverse Reinforcement Learning (IRL) seeks to recover the reward function from observations of an expert's behavior. It is well known that, in general, various reward functions can lead to the same optimal policy, and hence IRL is ill-defined. However, Cao et al. (2021) showed that, if we observe two or more experts with different discount factors or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work starts by showing an equivalent identifiability statement from multiple experts in tabular MDPs based on a rank condition, which is easily verifiable and is also shown to be necessary. We then extend our result to several further scenarios: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we only have access to approximate transition matrices. Even when the reward is not identifiable, we provide conditions under which data on multiple experts in a given environment allows one to generalize and train an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.

Author Information

Paul Rolland (EPFL)
Luca Viano (EPFL)
Norman Schürhoff (University of Lausanne, Swiss Finance Institute, CEPR)
Boris Nikolov (University of Lausanne and Swiss Finance Institute)
Volkan Cevher (EPFL)
