Poster
Supervised autoencoders: Improving generalization performance with unsupervised regularizers
Lei Le · Andrew Patterson · Martha White

Thu Dec 06 02:00 PM -- 04:00 PM (PST) @ Room 517 AB #115

Generalization performance is a central goal in machine learning, particularly when learning representations with large neural networks. A common strategy to improve generalization has been the use of regularizers, typically a norm constraint on the parameters. Regularizing the hidden layers of a neural network architecture, however, is not straightforward. There have been a few effective layer-wise proposals, but without theoretical guarantees of improved performance. In this work, we theoretically and empirically analyze one such model, called a supervised autoencoder: a neural network that jointly predicts both its inputs (reconstruction error) and its targets. We provide a novel generalization result for linear autoencoders, proving uniform stability based on the inclusion of the reconstruction error, particularly as an improvement on simplistic regularization such as norms and even on more advanced regularization such as the use of auxiliary tasks. Empirically, we then demonstrate that, across an array of architectures with varying numbers of hidden units and activation functions, the supervised autoencoder never harms performance relative to the corresponding standard neural network and can significantly improve generalization.
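
As a rough illustration of the architecture described in the abstract, the sketch below implements a supervised autoencoder with a single shared hidden layer and a joint objective: supervised loss plus reconstruction error. It assumes a PyTorch implementation; the layer sizes, ReLU activation, reconstruction weight, and training loop are illustrative choices, not the authors' exact setup.

```python
# Minimal sketch of a supervised autoencoder (assumption: PyTorch; details are illustrative).
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    """One hidden representation shared by a reconstruction head and a prediction head."""
    def __init__(self, n_inputs, n_hidden, n_targets):
        super().__init__()
        self.encoder = nn.Linear(n_inputs, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_inputs)     # reconstructs the inputs
        self.predictor = nn.Linear(n_hidden, n_targets)  # predicts the targets

    def forward(self, x):
        h = torch.relu(self.encoder(x))  # activation is an illustrative choice
        return self.predictor(h), self.decoder(h)

def sae_loss(y_pred, y, x_recon, x, recon_weight=1.0):
    # Joint objective: supervised error plus reconstruction error as a regularizer.
    return nn.functional.mse_loss(y_pred, y) + recon_weight * nn.functional.mse_loss(x_recon, x)

# Toy usage on random data (illustration only).
x = torch.randn(128, 20)
y = torch.randn(128, 1)
model = SupervisedAutoencoder(n_inputs=20, n_hidden=10, n_targets=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    y_pred, x_recon = model(x)
    loss = sae_loss(y_pred, y, x_recon, x)
    loss.backward()
    opt.step()
```

Setting the reconstruction weight to zero recovers the corresponding standard neural network against which the supervised autoencoder is compared in the abstract.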

Author Information

Lei Le (Indiana University Bloomington)
Andrew Patterson (University of Alberta)
Martha White (University of Alberta)
