Poster
Regularized linear autoencoders recover the principal components, eventually
Xuchan Bao · James Lucas · Sushant Sachdeva · Roger Grosse

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #946

Our understanding of learning input-output relationships with neural nets has improved rapidly in recent years, but little is known about the convergence of the underlying representations, even in the simple case of linear autoencoders (LAEs). We show that when trained with proper regularization, LAEs can directly learn the optimal representation -- ordered, axis-aligned principal components. We analyze two such regularization schemes: non-uniform L2 regularization and a deterministic variant of nested dropout [Rippel et al., ICML 2014]. Though both regularization schemes converge to the optimal representation, we show that this convergence is slow due to ill-conditioning that worsens with increasing latent dimension. We show that the inefficiency of learning the optimal representation is not inevitable -- we present a simple modification to the gradient descent update that greatly speeds up convergence empirically.
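The abstract describes the non-uniform L2 scheme only at a high level. The sketch below illustrates the idea under stated assumptions: a linear autoencoder trained by plain gradient descent on a squared-error reconstruction loss, with a distinct L2 penalty weight on each latent dimension (row of the encoder and column of the decoder). The toy data, penalty weights, learning rate, and step count are illustrative choices, not values from the paper, and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 20, 5                                  # samples, input dim, latent dim

# Toy data with decaying variances, rotated so the principal directions
# are not the coordinate axes (illustrative assumption, not from the paper).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = (rng.normal(size=(n, d)) * np.sqrt(np.linspace(9.0, 0.5, d))) @ Q
X -= X.mean(axis=0)

W1 = 0.01 * rng.normal(size=(k, d))                    # encoder
W2 = 0.01 * rng.normal(size=(d, k))                    # decoder
lams = np.linspace(0.05, 0.25, k)                      # distinct L2 penalty per latent unit

lr, steps = 1e-2, 5000
for _ in range(steps):
    Z = X @ W1.T                                       # latent codes, (n, k)
    R = Z @ W2.T - X                                   # reconstruction residual, (n, d)
    # Gradients of (1/n)||X W1^T W2^T - X||_F^2 plus the non-uniform penalty:
    # row i of W1 and column i of W2 are both penalized with weight lams[i].
    gW1 = 2 * (W2.T @ R.T @ X) / n + 2 * lams[:, None] * W1
    gW2 = 2 * (R.T @ Z) / n + 2 * lams[None, :] * W2
    W1 -= lr * gW1
    W2 -= lr * gW2

# If training succeeds, the rows of W1 align (up to sign and shrinkage) with the
# top-k principal directions of X, ordered according to the penalty weights.
V = np.linalg.svd(X, full_matrices=False)[2][:k]       # top-k principal directions (rows)
print(np.abs((W1 / np.linalg.norm(W1, axis=1, keepdims=True)) @ V.T).round(2))
```

With distinct penalties, the alignment matrix printed at the end should be close to the identity (in absolute value), which is one way to check axis-aligned, ordered recovery; with a uniform penalty, the learned rows would only span the principal subspace.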

Author Information

Xuchan Bao (University of Toronto)
James Lucas (University of Toronto)
Sushant Sachdeva (University of Toronto)
Roger Grosse (University of Toronto)
