Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
Hao Liu · Tom Zahavy · Volodymyr Mnih · Satinder Singh

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #925

Large and diverse datasets have been the cornerstones of many impressive advancements in artificial intelligence. Intelligent creatures, however, learn by interacting with the environment, which changes both the input sensory signals and the state of the environment. In this work, we aim to bring the best of both worlds and propose an algorithm that exhibits exploratory behavior while utilizing large diverse datasets. Our key idea is to leverage deep generative models that are pretrained on static datasets and to introduce a dynamic model in the latent space. The transition dynamics simply mixes an action with a randomly sampled latent and then applies an exponential moving average for temporal persistency; the resulting latent is decoded to an image using the pretrained generator. We then employ an unsupervised reinforcement learning algorithm to explore in this environment and perform unsupervised representation learning on the collected data. We further leverage the temporal information of this data to pair data points as a natural supervision for representation learning. Our experiments suggest that the learned representations can be successfully transferred to downstream tasks in both vision and reinforcement learning domains.
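The latent transition dynamics described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the latent dimensionality, the EMA coefficient `ALPHA`, and the mixing weight `MIX` are all hypothetical, and the pretrained generator that would decode each latent to an image is left out.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 128   # hypothetical latent dimensionality of the generator
ALPHA = 0.9        # hypothetical EMA coefficient for temporal persistency
MIX = 0.5          # hypothetical weight mixing the action with fresh noise

def latent_step(z_prev, action):
    """One transition in the latent-space 'environment': mix the agent's
    action with a randomly sampled latent, then apply an exponential moving
    average so consecutive latents change smoothly over time."""
    noise = rng.standard_normal(LATENT_DIM)
    target = MIX * action + (1.0 - MIX) * noise
    return ALPHA * z_prev + (1.0 - ALPHA) * target

# Roll out a short trajectory; a pretrained generator G would decode each
# latent z_t to an image for the exploring agent to observe.
z = rng.standard_normal(LATENT_DIM)
trajectory = []
for t in range(5):
    action = rng.standard_normal(LATENT_DIM)  # stand-in for the agent's action
    z = latent_step(z, action)
    trajectory.append(z)

print(len(trajectory), trajectory[-1].shape)
```

Because of the EMA, nearby time steps yield similar latents, which is what makes temporally adjacent decoded frames a natural source of paired data for representation learning.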

Author Information

Hao Liu (University of California Berkeley)
Tom Zahavy (DeepMind)
Volodymyr Mnih (DeepMind)
Satinder Singh (DeepMind)