Poster
Off-Policy Imitation Learning from Observations
Zhuangdi Zhu · Kaixiang Lin · Bo Dai · Jiayu Zhou

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1373

Learning from Observations (LfO) is a practical reinforcement learning scenario in which many applications can benefit from reusing incomplete demonstration resources. Compared to conventional imitation learning (IL), LfO is more challenging because of the lack of expert action guidance. Distribution matching lies at the heart of both conventional IL and LfO. Traditional distribution-matching approaches are sample-costly, as they depend on on-policy transitions for policy learning. Toward sample efficiency, off-policy solutions have been proposed, which, however, either lack comprehensive theoretical justification or depend on the guidance of expert actions. In this work, we propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner. To further accelerate the learning procedure, we regulate the policy update with an inverse action model, which assists distribution matching from the perspective of mode covering. Extensive empirical results on challenging locomotion tasks indicate that our approach is comparable with the state of the art in terms of both sample efficiency and asymptotic performance.
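The inverse-action-model idea mentioned in the abstract can be illustrated with a minimal sketch, not the authors' implementation: an inverse dynamics model is fit on the agent's own (state, action, next state) transitions, then used to infer pseudo-actions for the expert's state-only transitions so the policy can be regularized with a behavior-cloning-style term. All class names, a deterministic policy, and the mean-squared-error losses below are illustrative assumptions.

```python
# Hypothetical sketch of an inverse action model used to regularize policy
# updates in LfO. Assumes a deterministic policy net: policy(s) -> action.
import torch
import torch.nn as nn


class InverseActionModel(nn.Module):
    """Predicts the action that caused the transition s -> s'."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))


def inverse_model_loss(inv_model, s, a, s_next):
    """Supervised regression on agent-collected transitions (s, a, s')."""
    return ((inv_model(s, s_next) - a) ** 2).mean()


def bc_regularizer(policy, inv_model, expert_s, expert_s_next):
    """Pulls the policy toward actions inferred for expert state pairs."""
    with torch.no_grad():
        pseudo_actions = inv_model(expert_s, expert_s_next)
    return ((policy(expert_s) - pseudo_actions) ** 2).mean()
```

In such a setup, the regularizer would be added to the main off-policy distribution-matching objective with a weighting coefficient; the exact objective and weighting used in the paper are not shown here.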

Author Information

Zhuangdi Zhu (Michigan State University)
Kaixiang Lin (Michigan State University)
Bo Dai (Google Brain)
Jiayu Zhou (Michigan State University)
