Sparse deep learning aims to address the challenge of the huge storage consumption of deep neural networks and to recover the sparse structure of target functions. Although tremendous empirical success has been achieved, most sparse deep learning algorithms lack theoretical support. On the other hand, another line of work has proposed theoretical frameworks that are computationally infeasible. In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors, and develop a set of computationally efficient variational inferences via continuous relaxation of the Bernoulli distribution. The variational posterior contraction rate is provided, which justifies the consistency of the proposed variational Bayes method. Interestingly, our empirical results demonstrate that this variational procedure provides uncertainty quantification in terms of the Bayesian predictive distribution and is also capable of accomplishing consistent variable selection by training a sparse multi-layer neural network.
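For intuition, the following is a minimal sketch (not the authors' implementation) of how a spike-and-slab variational posterior over the weights of a single layer can be trained with a continuous relaxation of the Bernoulli inclusion indicators, here the binary Concrete (Gumbel-Softmax) relaxation in PyTorch. The layer sizes, temperature, prior inclusion probability, and slab scale are illustrative assumptions.

```python
# Minimal sketch: spike-and-slab variational inference for one linear layer,
# with a Concrete relaxation of the Bernoulli inclusion variables.
# Hyperparameters below are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeSlabLinear(nn.Module):
    def __init__(self, in_features, out_features, temperature=0.5):
        super().__init__()
        self.temperature = temperature
        # Variational parameters: Gaussian slab (mean, log std) and
        # Bernoulli inclusion logits for every weight.
        self.w_mu = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.w_logsig = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.gate_logit = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x):
        # Reparameterized slab weights.
        w = self.w_mu + torch.exp(self.w_logsig) * torch.randn_like(self.w_mu)
        # Concrete (Gumbel-Softmax) relaxation of the Bernoulli indicators:
        # z = sigmoid((logit + logistic noise) / temperature), in (0, 1).
        u = torch.rand_like(self.gate_logit).clamp(1e-6, 1 - 1e-6)
        logistic_noise = torch.log(u) - torch.log1p(-u)
        z = torch.sigmoid((self.gate_logit + logistic_noise) / self.temperature)
        return F.linear(x, z * w)

    def kl(self, prior_sigma=1.0, prior_incl=0.1):
        # KL of the Bernoulli inclusion probabilities against prior_incl,
        # plus KL of the Gaussian slab against N(0, prior_sigma^2),
        # weighted by the inclusion probabilities.
        q = torch.sigmoid(self.gate_logit)
        p = torch.tensor(prior_incl)
        kl_bern = q * torch.log(q / p) + (1 - q) * torch.log((1 - q) / (1 - p))
        sig2 = torch.exp(2 * self.w_logsig)
        kl_gauss = 0.5 * ((sig2 + self.w_mu ** 2) / prior_sigma ** 2 - 1
                          - 2 * self.w_logsig
                          + 2 * torch.log(torch.tensor(prior_sigma)))
        return (kl_bern + q * kl_gauss).sum()

# Usage: maximize the ELBO, i.e. minimize (negative log-likelihood + KL).
layer = SpikeSlabLinear(20, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x, y = torch.randn(128, 20), torch.randn(128, 1)
for _ in range(100):
    opt.zero_grad()
    loss = F.mse_loss(layer(x), y, reduction='sum') + layer.kl()
    loss.backward()
    opt.step()
```

After training, weights whose posterior inclusion probability sigmoid(gate_logit) falls below a threshold can be pruned, which is how such a procedure can yield a sparse network and perform variable selection.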
Author Information
Jincheng Bai (Purdue University)
Qifan Song (Purdue University)
Guang Cheng (Purdue University)
More from the Same Authors
- 2021: Optimum-statistical Collaboration Towards Efficient Black-box Optimization
  Wenjie Li · Chi-Hua Wang · Guang Cheng
- 2022 Poster: Fair Bayes-Optimal Classifiers Under Predictive Parity
  Xianli Zeng · Edgar Dobriban · Guang Cheng
- 2022 Poster: Why Do Artificially Generated Data Help Adversarial Robustness
  Yue Xing · Qifan Song · Guang Cheng
- 2022 Poster: Phase Transition from Clean Training to Adversarial Training
  Yue Xing · Qifan Song · Guang Cheng
- 2022 Poster: Support Recovery in Sparse PCA with Incomplete Data
  Hanbyul Lee · Qifan Song · Jean Honorio
- 2021 Poster: On the Algorithmic Stability of Adversarial Training
  Yue Xing · Qifan Song · Guang Cheng
- 2020 Poster: Statistical Guarantees of Distributed Nearest Neighbor Classification
  Jiexin Duan · Xingye Qiao · Guang Cheng
- 2020 Poster: Directional Pruning of Deep Neural Networks
  Shih-Kang Chao · Zhanyu Wang · Yue Xing · Guang Cheng
- 2019 Poster: Bootstrapping Upper Confidence Bound
  Botao Hao · Yasin Abbasi Yadkori · Zheng Wen · Guang Cheng
- 2019 Poster: Rates of Convergence for Large-scale Nearest Neighbor Classification
  Xingye Qiao · Jiexin Duan · Guang Cheng
- 2018 Poster: Early Stopping for Nonparametric Testing
  Meimei Liu · Guang Cheng
- 2015 Poster: Non-convex Statistical Optimization for Sparse Tensor Graphical Model
  Wei Sun · Zhaoran Wang · Han Liu · Guang Cheng