Poster
Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation
Qingwen Bu · Jia Zeng · Li Chen · Yanchao Yang · Guyue Zhou · Junchi Yan · Ping Luo · Heming Cui · Yi Ma · Hongyang Li
East Exhibit Hall A-C #4111
Despite significant progress in robotics and embodied AI in recent years, deploying robots for long-horizon tasks remains a great challenge. The majority of prior art adheres to an open-loop philosophy and lacks real-time feedback, leading to error accumulation and poor robustness. A handful of approaches have endeavored to establish feedback mechanisms leveraging pixel-level differences or pre-trained visual representations, yet their efficacy and adaptability have proven limited. Inspired by classic closed-loop control systems, we propose CLOVER, a closed-loop visuomotor control framework that incorporates feedback mechanisms to improve adaptive robotic control. CLOVER consists of a text-conditioned video diffusion model for generating visual plans as reference inputs, a measurable embedding space for accurate error quantification, and a feedback-driven controller that refines actions from feedback and initiates replans as needed. Our framework demonstrates notable improvements on real-world robotic tasks and achieves state-of-the-art results on the CALVIN benchmark, improving by 8% over previous open-loop counterparts.
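To make the closed-loop structure concrete, below is a minimal, hypothetical sketch of the control cycle the abstract describes: a visual plan is generated from a language instruction, the error between the current observation and the next sub-goal is measured in an embedding space, the controller refines the action from that error, and a replan is triggered when the error grows too large. The components (planner, encoder, policy, environment) and the thresholds are NumPy stand-ins introduced here for illustration only; they are not the CLOVER implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_visual_plan(instruction, observation, horizon=8):
    """Stand-in for the text-conditioned video diffusion planner:
    returns a sequence of sub-goal 'frames' (random feature vectors here)."""
    return [rng.normal(size=128) for _ in range(horizon)]

def encode(frame):
    """Stand-in for the measurable embedding space used to quantify error."""
    return frame / (np.linalg.norm(frame) + 1e-8)

def policy(obs_emb, goal_emb):
    """Stand-in feedback-driven controller: maps the embedding error to an action."""
    return goal_emb - obs_emb  # toy proportional action on the embedding error

def step_env(observation, action):
    """Stand-in environment step: nudges the observation along the action."""
    return observation + 0.5 * action + 0.05 * rng.normal(size=observation.shape)

REPLAN_THRESHOLD = 1.5  # assumed: error above which a new visual plan is requested
GOAL_THRESHOLD = 0.2    # assumed: error below which the current sub-goal counts as reached

observation = rng.normal(size=128)
plan = generate_visual_plan("open the drawer", observation)
goal_idx = 0

for t in range(100):
    obs_emb = encode(observation)
    goal_emb = encode(plan[goal_idx])
    error = np.linalg.norm(goal_emb - obs_emb)  # error measured in the embedding space

    if error > REPLAN_THRESHOLD:                # feedback triggers a replan
        plan = generate_visual_plan("open the drawer", observation)
        goal_idx = 0
        continue

    if error < GOAL_THRESHOLD:                  # sub-goal reached, advance to the next one
        goal_idx += 1
        if goal_idx == len(plan):
            break
        continue

    action = policy(obs_emb, goal_emb)          # refine the action from the measured error
    observation = step_env(observation, action)
```

The key design choice this sketch illustrates is that the error signal lives in an embedding space rather than in raw pixels, so the same measurement both drives action refinement and decides when to replan.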