Poster

Unsupervised Learning of Disentangled Representations from Video

Emily Denton · Vighnesh Birodkar

Pacific Ballroom #153

Abstract:

We present a new model, DRNET, that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos. For the latter, we demonstrate the ability to coherently generate frames up to several hundred steps into the future.
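
As a rough illustration of the factorization the abstract describes, the sketch below pairs a content encoder (stationary part) with a pose encoder (time-varying part) and runs an LSTM over the pose codes to predict a future frame. All module names, shapes, and sizes here are illustrative assumptions, not the authors' architecture, and the paper's adversarial loss term is omitted for brevity.

```python
# Minimal DRNET-style sketch in PyTorch. Hypothetical architecture: the
# real model's layer sizes and adversarial training are not reproduced.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps a frame to a flat code; used for both content and pose."""
    def __init__(self, code_dim, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a frame from concatenated content and pose codes."""
    def __init__(self, content_dim, pose_dim, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(content_dim + pose_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, content, pose):
        h = self.fc(torch.cat([content, pose], dim=1)).view(-1, 64, 8, 8)
        return self.net(h)

content_enc = FrameEncoder(code_dim=128)
pose_enc = FrameEncoder(code_dim=16)
decoder = Decoder(content_dim=128, pose_dim=16)

# Content code from frame t, pose code from a later frame t+k of the same
# clip: temporal coherence pushes the content code toward the stationary
# part, leaving the time-varying part to the pose code. (The paper's
# adversarial term, discouraging content leakage into the pose code, is
# omitted here.)
frame_t = torch.rand(4, 3, 32, 32)   # batch of frames at time t
frame_tk = torch.rand(4, 3, 32, 32)  # same clips at time t+k
recon = decoder(content_enc(frame_t), pose_enc(frame_tk))
loss = nn.functional.mse_loss(recon, frame_tk)

# Future-frame prediction: run a standard LSTM over observed pose codes,
# then decode the predicted pose against the fixed content code.
lstm = nn.LSTM(input_size=16, hidden_size=64, batch_first=True)
head = nn.Linear(64, 16)
pose_seq = torch.rand(4, 10, 16)      # pose codes for 10 observed frames
out, _ = lstm(pose_seq)
next_pose = head(out[:, -1])          # predicted pose code for frame t+11
next_frame = decoder(content_enc(frame_t), next_pose)
```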
