Imitation Learning and its Challenges in Robotics

Mustafa Mukadam · Sanjiban Choudhury · Siddhartha Srinivasa

Room 516 CDE

Many animals, including humans, have the ability to acquire skills, knowledge, and social cues from a very young age. This ability to imitate by learning from demonstrations has inspired research across many disciplines, including anthropology, neuroscience, psychology, and artificial intelligence. In AI, imitation learning (IL) serves as an essential tool for learning skills that are difficult to program by hand. IL is particularly applicable to robotics, where learning by trial and error (reinforcement learning) can be hazardous in the real world. Despite many recent breakthroughs in IL, several challenges remain to be addressed in the context of robotics before robots can operate freely and interact with humans in the real world.

Some important challenges include: 1) achieving good generalization and sample efficiency when the user can provide only a limited number of demonstrations with little to no feedback; 2) learning safe behaviors in human environments that require minimal user intervention, in terms of safety overrides, without being overly conservative; and 3) leveraging data from multiple sources, including non-human sources, since limitations in hardware interfaces can often lead to poor-quality demonstrations.

In this workshop, we aim to bring together researchers and experts in robotics, imitation and reinforcement learning, deep learning, and human-robot interaction to:
- Formalize the representations and primary challenges in IL as they pertain to robotics
- Delineate the key strengths and limitations of existing approaches with respect to these challenges
- Establish common baselines, metrics, and benchmarks, and identify open questions
