Poster
Winning the Lottery with Continuous Sparsification
Pedro Savarese · Hugo Silva · Michael Maire
The search for efficient, sparse deep neural network models is most prominently performed by pruning: training a dense, overparameterized network and removing parameters, usually by following a manually crafted heuristic. Additionally, the recent Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks which, when trained from scratch on a comparable budget, match the performance of the original dense counterpart. We revisit fundamental aspects of pruning algorithms, pointing out missing ingredients in previous approaches, and develop a method, Continuous Sparsification, which searches for sparse networks based on a novel approximation of an intractable $\ell_0$ regularization. We compare against dominant heuristic-based methods on pruning as well as ticket search -- finding sparse subnetworks that can be successfully re-trained from an early iterate. Empirical results show that we surpass the state-of-the-art for both objectives, across models and datasets, including VGG trained on CIFAR-10 and ResNet-50 trained on ImageNet. In addition to setting a new standard for pruning, Continuous Sparsification also offers fast parallel ticket search, opening doors to new applications of the Lottery Ticket Hypothesis.
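The core idea sketched in the abstract is to replace a binary pruning mask with a smooth, temperature-controlled gate that hardens toward 0/1 during training. Below is a minimal, illustrative sketch of that soft-gating mechanism; the function names, the gate parameterization `sigmoid(beta * s)`, and the annealing schedule are assumptions for exposition, not the authors' actual implementation.

```python
import math

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def effective_weight(w, s, beta):
    # Soft relaxation of the binary mask b in {0, 1}: the gate
    # sigmoid(beta * s) is differentiable in s, so the mask can be
    # learned by gradient descent. As the temperature beta grows,
    # the gate approaches the step function H(s), recovering a
    # hard prune/keep decision.
    return w * sigmoid(beta * s)

# Annealing the temperature drives each gate toward 0 (pruned,
# s < 0) or 1 (kept, s > 0). Values here are illustrative.
w, s_keep, s_prune = 0.5, 2.0, -2.0
for beta in (1.0, 10.0, 100.0):
    kept = effective_weight(w, s_keep, beta)
    pruned = effective_weight(w, s_prune, beta)
```

In the paper's framing, a sparsity-inducing penalty on the gates (a smooth surrogate for the $\ell_0$ count of nonzero weights) is added to the training loss, so the network jointly learns weights and which of them to remove.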
Author Information
Pedro Savarese (TTIC)
Hugo Silva (University of Alberta)
Michael Maire (University of Chicago)
More from the Same Authors
- 2022: On Convexity and Linear Mode Connectivity in Neural Networks
  David Yunis · Kumar Kshitij Patel · Pedro Savarese · Gal Vardi · Jonathan Frankle · Matthew Walter · Karen Livescu · Michael Maire
- 2023 Poster: Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
  Xin Yuan · Pedro Savarese · Michael Maire
- 2022 Poster: Not All Bits have Equal Value: Heterogeneous Precisions via Trainable Noise
  Pedro Savarese · Xin Yuan · Yanjing Li · Michael Maire
- 2021 Poster: Online Meta-Learning via Learning with Layer-Distributed Memory
  Sudarshan Babu · Pedro Savarese · Michael Maire
- 2020 Poster: Self-Supervised Visual Representation Learning from Hierarchical Grouping
  Xiao Zhang · Michael Maire
- 2020 Spotlight: Self-Supervised Visual Representation Learning from Hierarchical Grouping
  Xiao Zhang · Michael Maire