

Poster in Workshop: 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning

Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations

Henry Clever · Ankur Handa · Hammad Mazhar · Qian Wan · Yashraj Narang · Maya Cakmak · Dieter Fox


Abstract:

Sharing autonomy between robots and human operators could facilitate data collection of robotic task demonstrations to continuously improve learned models. Yet, the means to communicate intent and reason about the future are disparate between humans and robots. Recent advancements in natural language processing (NLP) with Transformers lend both insight and specific tools to tackle this. The self-attention mechanism in Transformers aims to holistically understand a sequence of words, rather than emphasizing adjacent connections. The same holds when Transformers are applied to robotic task trajectories: given an environment state and task goal, the model can quickly update its plan with new information at every step while maintaining holistic knowledge of the past. A key insight is that human intent can be injected at any location within the time sequence if the user decides that the model-predicted actions are inappropriate. At every time step, the user can (1) do nothing and allow autonomous operation to continue while observing the robot's future plan sequence, or (2) take over and momentarily prescribe a different set of actions to nudge the model back on track and let it continue autonomously from there onwards. Virtual reality (VR) offers an ideal ground to communicate these intents to a robot, and to accumulate knowledge from human demonstrations. We develop Assistive Tele-op, a VR system that allows users to collect robot task demonstrations with both a high success rate and with greater ease than manual teleoperation systems.
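To make the shared-autonomy loop described above concrete, the sketch below shows one possible structure for it: a Transformer policy proposes a short plan over the trajectory history, and the human may momentarily override that plan before execution resumes autonomously. This is a minimal illustration under assumed interfaces, not the authors' implementation; the names `model.predict`, `env`, and `get_user_override` are hypothetical placeholders.

```python
def assistive_teleop_episode(model, env, goal, horizon=200, plan_len=10):
    """Run one episode of assistive tele-op (illustrative sketch).

    Assumed contracts (hypothetical, for illustration only):
      - model.predict(history, goal, plan_len) -> list of planned actions,
        produced by attending over the full state history and the task goal.
      - env.reset() -> initial state; env.step(action) -> (state, done).
      - get_user_override(planned_actions) -> list of user actions,
        or None if the user lets autonomous operation continue.
    """
    state = env.reset()
    history = [state]

    for _ in range(horizon):
        # The Transformer attends holistically over the past trajectory
        # and the goal, proposing the next plan_len actions.
        planned_actions = model.predict(history, goal, plan_len)

        # The user inspects the plan (e.g., visualized in VR) and may
        # momentarily prescribe corrective actions to nudge the model.
        override = get_user_override(planned_actions)
        actions = override if override is not None else planned_actions[:1]

        # Execute either the user's correction or the model's next action,
        # then return control to the model on the following step.
        for action in actions:
            state, done = env.step(action)
            history.append(state)
            if done:
                return history
    return history
```

Executed trajectories collected this way, whether fully autonomous or containing human corrections, can then serve as new demonstrations for continued training of the model.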
