Minimizing non-convex, high-dimensional objective functions is challenging, especially when training modern deep neural networks. In this paper, we propose a novel approach that divides the training process into two consecutive phases to obtain better generalization performance: Bayesian sampling and stochastic optimization. The first phase explores the energy landscape and captures the "fat" modes; the second fine-tunes the parameters learned in the first phase. In the Bayesian learning phase, we incorporate continuous tempering and stochastic approximation into Langevin dynamics to create an efficient and effective sampler, in which the temperature is adjusted automatically according to the designed "temperature dynamics". These strategies overcome the problem of being trapped early in poor local minima and achieve remarkable improvements across various types of neural networks, as shown by our theoretical analysis and empirical experiments.
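The sampling phase described above is built on stochastic-gradient Langevin dynamics with a time-varying temperature. The sketch below is a minimal, generic illustration of that idea, not the paper's exact sampler: the paper's "temperature dynamics" adjust the temperature automatically, whereas here a hypothetical fixed cooling schedule stands in for them, and the target is a toy 1-D Gaussian rather than a neural-network posterior.

```python
import numpy as np

def tempered_sgld_step(theta, grad_log_post, temperature, lr, rng):
    """One Langevin step at a given temperature.

    Update: theta <- theta + lr * grad_log_post(theta)
                    + sqrt(2 * lr * temperature) * N(0, I)
    Higher temperature injects more noise, helping the chain escape
    narrow minima; temperature -> 0 recovers plain gradient ascent.
    """
    noise = rng.normal(size=theta.shape)
    return theta + lr * grad_log_post(theta) + np.sqrt(2.0 * lr * temperature) * noise

# Toy target: standard normal, log p(x) = -x^2/2, so grad log p(x) = -x.
rng = np.random.default_rng(0)
theta = np.array([5.0])  # start far from the mode
for t in range(2000):
    # Hypothetical linear cooling schedule (a stand-in for the paper's
    # automatic temperature dynamics): hot early, cooler later.
    temperature = max(0.1, 1.0 - t / 2000.0)
    theta = tempered_sgld_step(theta, lambda x: -x, temperature, lr=0.01, rng=rng)
```

In the paper's two-phase scheme, a loop like this would produce the parameters handed to the second, stochastic-optimization phase for fine-tuning.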
Author Information
Nanyang Ye (University of Cambridge)
Zhanxing Zhu (Peking University)
Rafal Mantiuk (University of Cambridge)
More from the Same Authors
- 2021 Spotlight: Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay
  Ruosi Wan · Zhanxing Zhu · Xiangyu Zhang · Jian Sun
- 2021 Poster: Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay
  Ruosi Wan · Zhanxing Zhu · Xiangyu Zhang · Jian Sun
- 2020 Poster: Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
  Dinghuai Zhang · Mao Ye · Chengyue Gong · Zhanxing Zhu · Qiang Liu
- 2020 Poster: Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
  Guangda Ji · Zhanxing Zhu
- 2019 Poster: You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
  Dinghuai Zhang · Tianyuan Zhang · Yiping Lu · Zhanxing Zhu · Bin Dong
- 2018 Poster: Thermostat-assisted continuously-tempered Hamiltonian Monte Carlo for Bayesian learning
  Rui Luo · Jianhong Wang · Yaodong Yang · Jun Wang · Zhanxing Zhu
- 2018 Poster: Reinforced Continual Learning
  Ju Xu · Zhanxing Zhu
- 2018 Poster: Bayesian Adversarial Learning
  Nanyang Ye · Zhanxing Zhu