VIMA: General Robot Manipulation with Multimodal Prompts
Yunfan Jiang · Agrim Gupta · Zichen Zhang · Guanzhi Wang · Yongqiang Dou · Yanjun Chen · Fei-Fei Li · Anima Anandkumar · Yuke Zhu · Linxi Fan
Event URL: https://openreview.net/forum?id=oU2DzdTI94

Prompt-based learning has emerged as a successful paradigm in natural language processing, where a single general-purpose language model can be instructed to perform any task specified by input prompts. Yet task specification in robotics comes in various forms, such as imitating one-shot demonstrations, following language instructions, and reaching visual goals. These are often considered different tasks and tackled by specialized models. This work shows that we can express a wide spectrum of robot manipulation tasks with multimodal prompts, interleaving textual and visual tokens. We design a transformer-based generalist robot agent, VIMA, that processes these prompts and outputs motor actions autoregressively. To train and evaluate VIMA, we develop a new simulation benchmark with thousands of procedurally generated tabletop tasks with multimodal prompts, 600K+ expert trajectories for imitation learning, and a four-level evaluation protocol for systematic generalization. VIMA scales well in both model capacity and data size: it outperforms prior state-of-the-art methods in the hardest zero-shot generalization setting by up to 2.9x in task success rate given the same training data, and with 10x less training data it still performs 2.7x better than the top competing approach. Video demos are available at https://iclr3081.github.io/.
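To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how a multimodal prompt of interleaved text tokens and image features can be embedded into one shared sequence, with discrete motor-action tokens then decoded autoregressively by a transformer that cross-attends to the prompt. All module names, vocabulary sizes, and dimensions are hypothetical placeholders.

import torch
import torch.nn as nn

D_MODEL = 256        # hypothetical embedding width
TEXT_VOCAB = 1000    # hypothetical text-token vocabulary size
ACTION_VOCAB = 64    # hypothetical number of discretized action bins
IMG_FEAT = 512       # hypothetical per-object visual feature size

class MultimodalPromptPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_embed = nn.Embedding(TEXT_VOCAB, D_MODEL)
        self.img_proj = nn.Linear(IMG_FEAT, D_MODEL)   # lift visual features into the token space
        self.action_embed = nn.Embedding(ACTION_VOCAB + 1, D_MODEL)  # +1 slot for a BOS token
        layer = nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, ACTION_VOCAB)

    def encode_prompt(self, segments):
        # Interleave text and image segments into one shared token sequence.
        tokens = []
        for kind, value in segments:
            if kind == "text":
                tokens.append(self.text_embed(value))   # (n_words, D_MODEL)
            else:
                tokens.append(self.img_proj(value))     # (n_patches, D_MODEL)
        return torch.cat(tokens, dim=0).unsqueeze(0)    # (1, n_tokens, D_MODEL)

    @torch.no_grad()
    def act(self, segments, horizon=4):
        # Greedy autoregressive decoding of `horizon` action tokens,
        # cross-attending to the encoded multimodal prompt at every step.
        memory = self.encode_prompt(segments)
        actions = [ACTION_VOCAB]                        # start with BOS
        for _ in range(horizon):
            tgt = self.action_embed(torch.tensor([actions]))           # (1, t, D_MODEL)
            mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
            out = self.decoder(tgt, memory, tgt_mask=mask)
            actions.append(self.head(out[:, -1]).argmax(-1).item())
        return actions[1:]

# Toy prompt in the spirit of "put <image of object> into <image of container>".
policy = MultimodalPromptPolicy()
prompt = [
    ("text", torch.randint(0, TEXT_VOCAB, (2,))),   # stand-in word ids
    ("image", torch.randn(3, IMG_FEAT)),            # stand-in object-crop features
    ("image", torch.randn(3, IMG_FEAT)),
]
print(policy.act(prompt))  # four arbitrary action-bin ids from the untrained model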

Author Information

Yunfan Jiang (Stanford University)
Agrim Gupta (Stanford University)
Zichen Zhang (Macalester College)

An art and sports aficionado unveiling the beauty of the mathematics and computer science behind artificial intelligence.

Guanzhi Wang (Caltech)
Yongqiang Dou (Tsinghua University)

Turning ideas into action.

Yanjun Chen (Stanford University)
Fei-Fei Li (Stanford University)
Anima Anandkumar (NVIDIA / Caltech)
Yuke Zhu (University of Texas at Austin)
Linxi Fan (NVIDIA, Stanford)
