
Implicit Rank-Minimizing Autoencoder
Li Jing · Jure Zbontar · Yann LeCun

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #941

An important component of autoencoder methods is the mechanism by which the information capacity of the latent representation is minimized or limited. In this work, the rank of the covariance matrix of the codes is implicitly minimized by relying on the fact that gradient descent learning in multi-layer linear networks leads to minimum-rank solutions. By inserting a number of extra linear layers between the encoder and the decoder, the system spontaneously learns representations with a low effective dimension. The model, dubbed Implicit Rank-Minimizing Autoencoder (IRMAE), is simple, deterministic, and learns a continuous latent space. We demonstrate the validity of the method on several image generation and representation learning tasks.
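The core architectural change described above can be sketched in PyTorch: a stack of square linear layers, with no nonlinearities between them, placed after the encoder. The encoder/decoder shapes, layer widths, and the number of extra linear layers below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

latent_dim = 128   # size of the code (illustrative)
num_linear = 4     # number of extra linear layers (a hyperparameter; value assumed here)

# Hypothetical encoder/decoder for 28x28 images; the paper's networks differ.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784),
)

# The IRMAE component: a chain of square linear maps with no activations.
# Training this deep linear sub-network by gradient descent implicitly
# biases the covariance of the codes toward low rank.
linear_stack = nn.Sequential(
    *[nn.Linear(latent_dim, latent_dim, bias=False) for _ in range(num_linear)]
)

x = torch.randn(32, 1, 28, 28)   # a dummy batch of images
z = linear_stack(encoder(x))     # code with low effective dimension after training
x_hat = decoder(z)               # reconstruction
```

At inference time the linear stack can be folded into the encoder, since a product of linear maps is itself a single linear map, so the extra layers add no cost to the deployed model.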

Author Information

Li Jing (Facebook AI Research)

Li Jing is a postdoctoral researcher at Facebook AI Research (FAIR), working with Yann LeCun on self-supervised learning. Li is also interested in representation learning, optimization, flow-based models, and energy-based models. Before joining FAIR, he obtained his PhD in physics at MIT.

Jure Zbontar (Facebook)
Yann LeCun (Facebook)
