Causal Imitation Learning With Unobserved Confounders
Junzhe Zhang, Daniel Kumor, Elias Bareinboim
Oral presentation: Orals & Spotlights Track 19: Probabilistic/Causality
on 2020-12-09T06:15:00-08:00 - 2020-12-09T06:30:00-08:00
Abstract: One of the common ways children learn is by mimicking adults. Imitation learning focuses on learning policies with suitable performance from demonstrations generated by an expert, when the performance measure is unspecified and the reward signal is unobserved. Popular methods for imitation learning start either by directly mimicking the behavior policy of the expert (behavior cloning) or by learning a reward function that prioritizes the observed expert trajectories (inverse reinforcement learning). However, these methods rely on the assumption that the covariates used by the expert to determine her/his actions are fully observed. In this paper, we relax this assumption and study imitation learning when the sensory inputs of the learner and the expert differ. First, we provide a non-parametric, graphical criterion that is complete (both necessary and sufficient) for determining the feasibility of imitation from a combination of demonstration data and qualitative assumptions about the underlying environment, represented in the form of a causal model. We then show that even when this criterion does not hold, imitation can still be feasible by exploiting quantitative knowledge of the expert trajectories. Finally, we develop an efficient procedure for learning the imitating policy from the expert's trajectories.
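The failure mode that motivates the paper can be illustrated with a toy simulation. The environment below is a hypothetical sketch (the variable names `U`, `X`, and the reward structure are illustrative assumptions, not taken from the paper): the expert acts on a confounder `U` that the imitator never observes, while the imitator only sees a noisy proxy `X`. Naive behavior cloning on the recorded (X, A) pairs then falls short of the expert's performance.

```python
import random
from collections import defaultdict

random.seed(0)

def sample_episode(expert, policy=None):
    """One episode of a toy confounded environment (illustrative only)."""
    u = random.randint(0, 1)                     # latent confounder, hidden from the imitator
    x = u if random.random() < 0.8 else 1 - u    # noisy observed proxy of U
    a = u if expert else policy(x)               # expert sees U; imitator only sees X
    y = 1 if a == u else 0                       # reward: act to match the confounder
    return x, a, y

# 1) Collect expert demonstrations; only (X, A) are recorded, U and Y are not.
demos = [sample_episode(expert=True)[:2] for _ in range(10_000)]

# 2) Behavior cloning: estimate the expert's conditional P(A | X) from the demos.
counts = defaultdict(lambda: [0, 0])
for x, a in demos:
    counts[x][a] += 1

def cloned_policy(x):
    n0, n1 = counts[x]
    return 1 if n1 > n0 else 0

# 3) Evaluate both policies by Monte Carlo.
n = 10_000
expert_reward = sum(sample_episode(expert=True)[2] for _ in range(n)) / n
clone_reward = sum(sample_episode(expert=False, policy=cloned_policy)[2] for _ in range(n)) / n
print(f"expert: {expert_reward:.2f}, behavior cloning: {clone_reward:.2f}")
```

Here the expert always earns reward 1, while the cloned policy can do no better than the predictive accuracy of `X` for `U` (about 0.8), so mimicking the observed behavior does not recover the expert's performance. The paper's graphical criterion characterizes exactly when such a gap can, or cannot, be closed from the available data and causal assumptions.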