

Poster in Workshop: Deep Reinforcement Learning

Hierarchical Few-Shot Imitation with Skill Transition Models

Kourosh Hakhamaneshi · Ruihan Zhao · Albert Zhan · Pieter Abbeel · Misha Laskin


Abstract:

A desirable property of autonomous agents is the ability to both solve long-horizon problems and generalize to unseen tasks. Recent advances in data-driven skill learning have shown that extracting behavioral priors from offline data can enable agents to solve challenging long-horizon tasks with reinforcement learning. However, generalization to tasks unseen during behavioral prior training remains an outstanding challenge. To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few demonstrations at test-time. FIST learns an inverse skill dynamics model and utilizes a semi-parametric approach for imitation. We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments requiring traversing unseen parts of a large maze and 7-DoF robotic arm experiments requiring manipulating previously unseen objects in a kitchen.
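Below is a minimal sketch, not the authors' implementation, of the two components the abstract names: an inverse skill dynamics model that predicts a skill embedding from the current state and a lookahead state, and a semi-parametric nearest-neighbor lookup that selects that lookahead state from a few test-time demonstrations. All class and function names, shapes, and the MLP architecture are illustrative assumptions.

```python
# Hypothetical sketch of FIST-style components; names and shapes are assumptions.
import torch
import torch.nn as nn


class InverseSkillDynamics(nn.Module):
    """MLP mapping (s_t, s_{t+H}) -> skill embedding z (assumed architecture)."""

    def __init__(self, state_dim: int, skill_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, skill_dim),
        )

    def forward(self, s_t: torch.Tensor, s_future: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_t, s_future], dim=-1))


def retrieve_future_state(s_t: torch.Tensor,
                          demo_states: torch.Tensor,
                          horizon: int) -> torch.Tensor:
    """Semi-parametric step: find the demo state nearest to s_t and return the
    demo state `horizon` steps later as the lookahead target s_{t+H}."""
    dists = torch.cdist(s_t.unsqueeze(0), demo_states)         # (1, T)
    idx = int(dists.argmin())
    idx_future = min(idx + horizon, demo_states.shape[0] - 1)  # clamp at demo end
    return demo_states[idx_future]


if __name__ == "__main__":
    state_dim, skill_dim, horizon = 30, 8, 10
    model = InverseSkillDynamics(state_dim, skill_dim)

    demo_states = torch.randn(200, state_dim)   # states from one test-time demo
    s_t = torch.randn(state_dim)                # current environment state

    s_future = retrieve_future_state(s_t, demo_states, horizon)
    z = model(s_t, s_future)                    # skill fed to a low-level policy
    print(z.shape)                              # torch.Size([8])
```

In this reading, the predicted skill embedding would be decoded into actions by a low-level policy extracted from the offline data; that decoder is omitted here since the abstract does not specify it.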
