Learning Deep Parsimonious Representations
Renjie Liao · Alex Schwing · Richard Zemel · Raquel Urtasun
2016 Poster
Abstract
In this paper we aim to facilitate generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a clustering-based regularization that encourages parsimonious representations. Our k-means style objective is easy to optimize and flexible, supporting various forms of clustering, including sample and spatial clustering as well as co-clustering. We demonstrate the effectiveness of our approach on the tasks of unsupervised learning, classification, fine-grained categorization, and zero-shot learning.
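The abstract does not spell out the objective in detail; as a rough illustration, the sketch below shows one way a k-means style regularizer over a layer's representations (the sample-clustering case) could be written in PyTorch. The class name, the hard-assignment rule, the moving-average center update, and all hyperparameters are assumptions made for illustration, not the paper's implementation.

```python
import torch

class ClusteringRegularizer(torch.nn.Module):
    """Hypothetical k-means style regularizer on a layer's representations.

    Each sample in the mini-batch is assigned to its nearest cluster center,
    and the penalty is the mean squared distance to that center. Centers are
    kept as a buffer and updated with an exponential moving average rather
    than by backpropagation (an assumption of this sketch).
    """

    def __init__(self, num_clusters: int, dim: int, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("centers", torch.randn(num_clusters, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) representations taken from some layer of the network.
        dists = torch.cdist(z, self.centers)      # (batch, num_clusters)
        assign = dists.argmin(dim=1)              # hard k-means assignment
        loss = (z - self.centers[assign]).pow(2).sum(dim=1).mean()

        # Moving-average update of the centers that received samples.
        with torch.no_grad():
            for k in assign.unique():
                batch_mean = z[assign == k].mean(dim=0)
                self.centers[k] = (self.momentum * self.centers[k]
                                   + (1.0 - self.momentum) * batch_mean)
        return loss

# Usage sketch: add the penalty to the task loss with a weighting coefficient.
# total_loss = task_loss + reg_weight * regularizer(features)
```

Spatial clustering and co-clustering variants mentioned in the abstract would change what gets clustered (e.g., spatial positions of feature maps, or features and samples jointly) rather than the overall structure of the penalty.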