Poster
LISA: Learning Interpretable Skill Abstractions from Language
Divyansh Garg · Skanda Vaidyanath · Kuno Kim · Jiaming Song · Stefano Ermon

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #521

Learning policies that effectively utilize language instructions in complex, multi-task environments is an important problem in imitation learning. While it is possible to condition on an entire language instruction directly, such an approach can generalize poorly to unseen instructions. To encode complex instructions into skills that generalize, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that learns diverse, interpretable skills from language-conditioned demonstrations. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and with the behavior of the learned policy. In navigation and robotic manipulation environments, LISA outperforms a strong non-hierarchical baseline in the low-data regime and composes learned skills to solve tasks with unseen long-range instructions. Our method offers a more natural way to condition on language in sequential decision-making problems and achieves interpretable, controllable behavior through the learned skills.
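The abstract's core mechanism is a vector-quantization bottleneck that maps continuous encodings to discrete skill codes. Below is a minimal PyTorch sketch of such a bottleneck for intuition only: the class name SkillQuantizer, the codebook size, and the straight-through/commitment-loss details are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkillQuantizer(nn.Module):
    """Illustrative VQ bottleneck: snaps a continuous encoding of the
    language instruction (and observations) to the nearest entry in a
    learned codebook of discrete skill codes."""

    def __init__(self, num_codes: int = 20, code_dim: int = 128):
        super().__init__()
        # Codebook of skill embeddings; the sizes here are hypothetical.
        self.codebook = nn.Embedding(num_codes, code_dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z: torch.Tensor):
        # z: (batch, code_dim) continuous output of an upstream encoder.
        dists = torch.cdist(z, self.codebook.weight)   # Euclidean distance, (batch, num_codes)
        idx = dists.argmin(dim=-1)                     # discrete skill index
        quantized = self.codebook(idx)                 # (batch, code_dim)
        # Straight-through estimator: copy gradients past the argmin
        # so the encoder still trains end-to-end.
        quantized_st = z + (quantized - z).detach()
        # Commitment loss keeps encoder outputs close to their chosen codes.
        commit_loss = F.mse_loss(z, quantized.detach())
        return quantized_st, idx, commit_loss
```

The discrete index `idx` is what makes the skills inspectable: because each code is reused across demonstrations, its correlation with instruction words and with the policy's behavior can be examined directly, which is the interpretability property the abstract highlights.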

Author Information

Divyansh Garg (Collaborative Robotics)
Skanda Vaidyanath (Stanford University)

Master's student in CS interested in reinforcement learning, decision making, and control.

Kuno Kim (Stanford University)
Jiaming Song (Stanford University)

I am a first-year Ph.D. student at Stanford University. I think about problems in machine learning and deep learning under the supervision of Stefano Ermon. I did my undergrad at Tsinghua University, where I was lucky enough to collaborate with Jun Zhu and Lawrence Carin on scalable Bayesian machine learning.

Stefano Ermon (Stanford University)
