Poster

Iso-Dream: Isolating Noncontrollable Visual Dynamics in World Models

Minting Pan · Xiangming Zhu · Yunbo Wang · Xiaokang Yang

Keywords: [ Visual dynamics ] [ World model ] [ Reinforcement Learning ]

Spotlight presentation: Lightning Talks 5A-3
Thu 8 Dec 10 a.m. PST — 10:15 a.m. PST

Abstract:

World models learn the consequences of actions in vision-based interactive systems. In practical scenarios such as autonomous driving, however, there commonly exist noncontrollable dynamics that are independent of the action signals, making it difficult to learn effective world models. We therefore need to enable world models to decouple the controllable and noncontrollable dynamics entangled in spatiotemporal data. To this end, we present a reinforcement learning approach named Iso-Dream, which extends the Dream-to-Control framework in two aspects. First, the world model contains a three-branch neural architecture. By solving the inverse dynamics problem, it learns to factorize latent representations according to their responses to action signals. Second, during behavior learning, we estimate state values by rolling out a sequence of noncontrollable states (less related to the actions) into the future and associating the current controllable state with them. In this way, isolating the mixed dynamics greatly facilitates long-horizon decision-making in realistic scenes, such as avoiding potential future risks by predicting the movement of other vehicles in autonomous driving. Experiments show that Iso-Dream effectively decouples the mixed dynamics and remarkably outperforms existing approaches across a wide range of visual control and prediction domains.
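The decoupling idea in the abstract can be sketched in a few lines: a controllable branch that responds to actions, a noncontrollable branch that evolves on its own, and a value estimate that associates the current controllable state with a rollout of future noncontrollable states. This is a minimal illustrative sketch, not the paper's actual architecture; all names, shapes, and the random linear transitions standing in for the learned three-branch recurrent model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, ACTION, HORIZON = 4, 2, 3

# Illustrative linear transitions standing in for the learned branches.
A_ctrl = rng.normal(size=(LATENT, LATENT)) * 0.1
B_ctrl = rng.normal(size=(LATENT, ACTION)) * 0.1
A_free = rng.normal(size=(LATENT, LATENT)) * 0.1  # action-free dynamics

def step_controllable(s, a):
    # Controllable branch: the next state depends on the action signal.
    return np.tanh(A_ctrl @ s + B_ctrl @ a)

def step_noncontrollable(z):
    # Noncontrollable branch: evolves independently of any action.
    return np.tanh(A_free @ z)

def rollout_noncontrollable(z, horizon):
    # Roll the action-free state forward to expose future "risks".
    future = []
    for _ in range(horizon):
        z = step_noncontrollable(z)
        future.append(z)
    return future

def value_estimate(s, z, horizon=HORIZON):
    # Associate the current controllable state with the predicted
    # future noncontrollable states (a stand-in for the value head).
    future = rollout_noncontrollable(z, horizon)
    return float(sum(s @ f for f in future))

s0 = rng.normal(size=LATENT)   # controllable latent state
z0 = rng.normal(size=LATENT)   # noncontrollable latent state
a0 = rng.normal(size=ACTION)   # action signal
s1 = step_controllable(s0, a0)
v = value_estimate(s1, z0)
```

Because the noncontrollable rollout never consumes an action, the agent can look ahead at environment-driven dynamics (e.g. other vehicles) before committing to its own behavior.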
