Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former suffers from an extremely large computation cost, while the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of pruned networks to recover their original performance). Pruning plasticity helps explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as are pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), that advances the state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods with ResNet-50 on ImageNet without extending the training time. We release all code at https://github.com/Shiweiliuiiiiiii/GraNet.
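The prune-and-regenerate idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it assumes a flat weight array with a boolean mask, prunes the smallest-magnitude active weights, and regenerates the same number of connections among the inactive positions by gradient magnitude:

```python
import numpy as np

def granet_step(weights, grads, mask, prune_frac=0.1):
    """One illustrative GraNet-style update (sketch, not the official code):
    1) deactivate the prune_frac smallest-magnitude active weights,
    2) regenerate the same number of connections by reviving the
       inactive positions with the largest gradient magnitude,
    so the overall sparsity level is unchanged."""
    mask = mask.copy()
    active = np.flatnonzero(mask)
    n = int(len(active) * prune_frac)
    if n == 0:
        return mask
    # 1) prune: drop the weakest active connections
    drop = active[np.argsort(np.abs(weights[active]))[:n]]
    mask[drop] = False
    # 2) regenerate: revive inactive positions with the largest gradients
    inactive = np.flatnonzero(~mask)
    grow = inactive[np.argsort(-np.abs(grads[inactive]))[:n]]
    mask[grow] = True
    return mask

w = np.array([0.5, -0.01, 0.3, 0.0, -0.7, 0.02])
g = np.array([0.1, 0.05, 0.2, 0.9, 0.3, 0.1])
m = w != 0
new_m = granet_step(w, g, m, prune_frac=0.2)
print(new_m.sum() == m.sum())  # sparsity preserved: True
```

Because regeneration exactly balances pruning, the sparsity of the mask is invariant across steps, which is what makes the sparse-to-sparse version of GraNet possible without extra training cost.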
Author Information
Shiwei Liu (Netherlands)
I am a third-year Ph.D. student in the Data Mining Group, Department of Mathematics and Computer Science, Eindhoven University of Technology (TU/e). My current research topics are dynamic sparse training, sparse neural networks, pruning, the generalization of neural networks, etc. I am looking for a postdoc position in machine learning.
Tianlong Chen (University of Texas at Austin)
Xiaohan Chen (The University of Texas at Austin)
Zahra Atashgahi (University of Twente)
Lu Yin (Eindhoven University of Technology)
Huanyu Kou (University of Leeds)
Li Shen (Tencent AI Lab)
Mykola Pechenizkiy (TU Eindhoven)
Zhangyang Wang (UT Austin)
Decebal Constantin Mocanu (Eindhoven University of Technology)
More from the Same Authors
-
2022 Poster: Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach »
Peng Mi · Li Shen · Tianhe Ren · Yiyi Zhou · Xiaoshuai Sun · Rongrong Ji · Dacheng Tao -
2022 : HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing »
Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans -
2022 : An Empirical Evaluation of Posterior Sampling for Constrained Reinforcement Learning »
Danil Provodin · Pratik Gajane · Mykola Pechenizkiy · Maurits Kaptein -
2022 Spotlight: Sparse Winning Tickets are Data-Efficient Image Recognizers »
Mukund Varma T · Xuxi Chen · Zhenyu Zhang · Tianlong Chen · Subhashini Venugopalan · Zhangyang Wang -
2022 Poster: Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets »
Ruisi Cai · Zhenyu Zhang · Tianlong Chen · Xiaohan Chen · Zhangyang Wang -
2022 Poster: Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative »
Tianxin Wei · Yuning You · Tianlong Chen · Yang Shen · Jingrui He · Zhangyang Wang -
2022 Poster: Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation »
Zeyu Qin · Yanbo Fan · Yi Liu · Li Shen · Yong Zhang · Jue Wang · Baoyuan Wu -
2022 Poster: MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models »
Erdun Gao · Ignavier Ng · Mingming Gong · Li Shen · Wei Huang · Tongliang Liu · Kun Zhang · Howard Bondell -
2022 Poster: Dynamic Sparse Network for Time Series Classification: Learning What to “See” »
Qiao Xiao · Boqian Wu · Yu Zhang · Shiwei Liu · Mykola Pechenizkiy · Elena Mocanu · Decebal Constantin Mocanu -
2022 Poster: Sparse Winning Tickets are Data-Efficient Image Recognizers »
Mukund Varma T · Xuxi Chen · Zhenyu Zhang · Tianlong Chen · Subhashini Venugopalan · Zhangyang Wang -
2022 Poster: Where to Pay Attention in Sparse Training for Feature Selection? »
Ghada Sokar · Zahra Atashgahi · Mykola Pechenizkiy · Decebal Constantin Mocanu -
2022 Poster: M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design »
hanxue liang · Zhiwen Fan · Rishov Sarkar · Ziyu Jiang · Tianlong Chen · Kai Zou · Yu Cheng · Cong Hao · Zhangyang Wang -
2022 Poster: Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again »
AJAY JAISWAL · Peihao Wang · Tianlong Chen · Justin Rousseau · Ying Ding · Zhangyang Wang -
2022 Poster: Advancing Model Pruning via Bi-level Optimization »
Yihua Zhang · Yuguang Yao · Parikshit Ram · Pu Zhao · Tianlong Chen · Mingyi Hong · Yanzhi Wang · Sijia Liu -
2022 Poster: A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking »
Keyu Duan · Zirui Liu · Peihao Wang · Wenqing Zheng · Kaixiong Zhou · Tianlong Chen · Xia Hu · Zhangyang Wang -
2021 : The Impact of Batch Learning in Stochastic Bandits »
Danil Provodin · Pratik Gajane · Mykola Pechenizkiy · Maurits Kaptein -
2021 Poster: Improving Contrastive Learning on Imbalanced Data via Open-World Sampling »
Ziyu Jiang · Tianlong Chen · Ting Chen · Zhangyang Wang -
2021 Poster: Stronger NAS with Weaker Predictors »
Junru Wu · Xiyang Dai · Dongdong Chen · Yinpeng Chen · Mengchen Liu · Ye Yu · Zhangyang Wang · Zicheng Liu · Mei Chen · Lu Yuan -
2021 Poster: IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers »
Bowen Pan · Rameswar Panda · Yifan Jiang · Zhangyang Wang · Rogerio Feris · Aude Oliva -
2021 Poster: Hyperparameter Tuning is All You Need for LISTA »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin -
2021 Poster: Chasing Sparsity in Vision Transformers: An End-to-End Exploration »
Tianlong Chen · Yu Cheng · Zhe Gan · Lu Yuan · Lei Zhang · Zhangyang Wang -
2021 Poster: Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective »
Tianlong Chen · Yu Cheng · Zhe Gan · Jingjing Liu · Zhangyang Wang -
2021 Poster: TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up »
Yifan Jiang · Shiyu Chang · Zhangyang Wang -
2021 Poster: AugMax: Adversarial Composition of Random Augmentations for Robust Training »
Haotao Wang · Chaowei Xiao · Jean Kossaifi · Zhiding Yu · Anima Anandkumar · Zhangyang Wang -
2021 Poster: Delayed Propagation Transformer: A Universal Computation Engine towards Practical Control in Cyber-Physical Systems »
Wenqing Zheng · Qiangqiang Guo · Hao Yang · Peihao Wang · Zhangyang Wang -
2021 Poster: The Elastic Lottery Ticket Hypothesis »
Xiaohan Chen · Yu Cheng · Shuohang Wang · Zhe Gan · Jingjing Liu · Zhangyang Wang -
2021 Poster: Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? »
Xiaolong Ma · Geng Yuan · Xuan Shen · Tianlong Chen · Xuxi Chen · Xiaohan Chen · Ning Liu · Minghai Qin · Sijia Liu · Zhangyang Wang · Yanzhi Wang -
2021 Poster: You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership »
Xuxi Chen · Tianlong Chen · Zhenyu Zhang · Zhangyang Wang -
2020 Poster: Graph Contrastive Learning with Augmentations »
Yuning You · Tianlong Chen · Yongduo Sui · Ting Chen · Zhangyang Wang · Yang Shen -
2020 Poster: Robust Pre-Training by Adversarial Contrastive Learning »
Ziyu Jiang · Tianlong Chen · Ting Chen · Zhangyang Wang -
2020 Poster: Training Stronger Baselines for Learning to Optimize »
Tianlong Chen · Weiyi Zhang · Zhou Jingyang · Shiyu Chang · Sijia Liu · Lisa Amini · Zhangyang Wang -
2020 Spotlight: Training Stronger Baselines for Learning to Optimize »
Tianlong Chen · Weiyi Zhang · Zhou Jingyang · Shiyu Chang · Sijia Liu · Lisa Amini · Zhangyang Wang -
2020 Poster: Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free »
Haotao Wang · Tianlong Chen · Shupeng Gui · TingKuei Hu · Ji Liu · Zhangyang Wang -
2020 Poster: The Lottery Ticket Hypothesis for Pre-trained BERT Networks »
Tianlong Chen · Jonathan Frankle · Shiyu Chang · Sijia Liu · Yang Zhang · Zhangyang Wang · Michael Carbin -
2019 : Poster Session »
Nathalie Baracaldo · Seth Neel · Tuyen Le · Dan Philps · Suheng Tao · Sotirios Chatzis · Toyo Suzumura · Wei Wang · WENHANG BAO · Solon Barocas · Manish Raghavan · Samuel Maina · Reginald Bryant · Kush Varshney · Skyler D. Speakman · Navdeep Gill · Nicholas Schmidt · Kevin Compher · Naveen Sundar Govindarajulu · Vivek Sharma · Praneeth Vepakomma · Tristan Swedish · Jayashree Kalpathy-Cramer · Ramesh Raskar · Shihao Zheng · Mykola Pechenizkiy · Marco Schreyer · Li Ling · Chirag Nagpal · Robert Tillman · Manuela Veloso · Hanjie Chen · Xintong Wang · Michael Wellman · Matthew van Adelsberg · Ben Wood · Hans Buehler · Mahmoud Mahfouz · Antonios Alexos · Megan Shearer · Antigoni Polychroniadou · Lucia Larise Stavarache · Dmitry Efimov · Johnston P Hall · Yukun Zhang · Emily Diana · Sumitra Ganesh · Vineeth Ravi · · Swetasudha Panda · Xavier Renard · Matthew Jagielski · Yonadav Shavit · Joshua Williams · Haoran Wei · Shuang (Sophie) Zhai · Xinyi Li · Hongda Shen · Daiki Matsunaga · Jaesik Choi · Alexis Laignelet · Batuhan Guler · Jacobo Roa Vicens · Ajit Desai · Jonathan Aigrain · Robert Samoilescu -
2019 Workshop: AI for Humanitarian Assistance and Disaster Response »
Ritwik Gupta · Robin Murphy · Trevor Darrell · Eric Heim · Zhangyang Wang · Bryce Goodman · Piotr Biliński -
2019 Poster: E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy »
Ziyu Jiang · Yue Wang · Xiaohan Chen · Pengfei Xu · Yang Zhao · Yingyan Lin · Zhangyang Wang -
2019 Poster: Learning to Optimize in Swarms »
Yue Cao · Tianlong Chen · Zhangyang Wang · Yang Shen -
2019 Poster: Model Compression with Adversarial Robustness: A Unified Optimization Framework »
Shupeng Gui · Haotao Wang · Haichuan Yang · Chen Yu · Zhangyang Wang · Ji Liu -
2018 : Posters and Open Discussions (see below for poster titles) »
Ramya Malur Srinivasan · Miguel Perez · Yuanyuan Liu · Ben Wood · Dan Philps · Kyle Brown · Daniel Martin · Mykola Pechenizkiy · Luca Costabello · Rongguang Wang · Suproteem Sarkar · Sangwoong Yoon · Zhuoran Xiong · Enguerrand Horel · Zhu (Drew) Zhang · Ulf Johansson · Jonathan Kochems · Gregory Sidier · Prashant Reddy · Lana Cuthbertson · Yvonne Wambui · Christelle Marfaing · Galen Harrison · Irene Unceta Mendieta · Thomas Kehler · Mark Weber · Li Ling · Ceena Modarres · Abhinav Dhall · Arash Nourian · David Byrd · Ajay Chander · Xiao-Yang Liu · Hongyang Yang · Shuang (Sophie) Zhai · Freddy Lecue · Sirui Yao · Rory McGrath · Artur Garcez · Vangelis Bacoyannis · Alexandre Garcia · Lukas Gonon · Mark Ibrahim · Melissa Louie · Omid Ardakanian · Cecilia Sönströd · Kojin Oshiba · Chaofan Chen · Suchen Jin · aldo pareja · Toyo Suzumura -
2018 Poster: Can We Gain More from Orthogonality Regularizations in Training Deep Networks? »
Nitin Bansal · Xiaohan Chen · Zhangyang Wang -
2018 Poster: Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin -
2018 Spotlight: Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds »
Xiaohan Chen · Jialin Liu · Zhangyang Wang · Wotao Yin