Knowledge distillation is a strategy for training a student network under the guidance of the soft outputs of a teacher network. It has been a successful method for model compression and knowledge transfer. However, knowledge distillation currently lacks a convincing theoretical understanding. On the other hand, recent findings on the neural tangent kernel allow us to approximate a wide neural network with a linear model built on the network's random features. In this paper, we theoretically analyze knowledge distillation for a wide neural network. First, we provide a transfer risk bound for the linearized model of the network. Then we propose a metric of the task's training difficulty, called data inefficiency. Based on this metric, we show that for a perfect teacher, a high ratio of the teacher's soft labels can be beneficial. Finally, for the case of an imperfect teacher, we find that hard labels can correct the teacher's wrong predictions, which explains the common practice of mixing hard and soft labels.
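The practice the abstract refers to, training the student on a weighted mixture of the teacher's soft labels and the ground-truth hard labels, can be made concrete with a short sketch. The following is a minimal PyTorch-style illustration, not code from the paper; the weight alpha (the soft-label ratio) and the softmax temperature T are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      alpha=0.9, T=4.0):
    """Weighted mix of soft (teacher) and hard (ground-truth) supervision.

    alpha weights the teacher's soft labels; (1 - alpha) weights the
    hard labels. T is the softmax temperature applied to both logits.
    """
    # Soft-label term: KL divergence between temperature-scaled
    # student and teacher distributions, rescaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard-label term: ordinary cross-entropy with the true classes,
    # which can correct an imperfect teacher's wrong predictions.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In this notation, the paper's perfect-teacher result corresponds to choosing alpha close to 1, while the imperfect-teacher result motivates keeping the (1 - alpha) hard-label term.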
Author Information
Guangda Ji (Peking University)
Zhanxing Zhu (Peking University)
More from the Same Authors
- 2021 Spotlight: Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay (Ruosi Wan · Zhanxing Zhu · Xiangyu Zhang · Jian Sun)
- 2023 Poster: Neural Lad: A Neural Latent Dynamics Framework for Times Series Modeling (ting li · Jianguo Li · Zhanxing Zhu)
- 2023 Poster: Implicit Bias of (Stochastic) Gradient Descent for Rank-1 Linear Neural Network (Bochen Lv · Zhanxing Zhu)
- 2021 Poster: Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay (Ruosi Wan · Zhanxing Zhu · Xiangyu Zhang · Jian Sun)
- 2020 Poster: Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework (Dinghuai Zhang · Mao Ye · Chengyue Gong · Zhanxing Zhu · Qiang Liu)
- 2019 Poster: You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle (Dinghuai Zhang · Tianyuan Zhang · Yiping Lu · Zhanxing Zhu · Bin Dong)
- 2018 Poster: Thermostat-assisted continuously-tempered Hamiltonian Monte Carlo for Bayesian learning (Rui Luo · Jianhong Wang · Yaodong Yang · Jun WANG · Zhanxing Zhu)
- 2018 Poster: Reinforced Continual Learning (Ju Xu · Zhanxing Zhu)
- 2018 Poster: Bayesian Adversarial Learning (Nanyang Ye · Zhanxing Zhu)
- 2017 Poster: Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks (Nanyang Ye · Zhanxing Zhu · Rafal Mantiuk)