Logit-Based Losses Limit the Effectiveness of Feature Knowledge Distillation
Nicholas Cooper · Lijun Chen · Sailesh Dwivedy · Danna Gurari
Abstract
Knowledge distillation (KD) methods transfer the knowledge of a parameter-heavy teacher model to a lightweight student model. The status quo for feature KD methods is to utilize loss functions based on logits (i.e., pre-softmax class scores) and intermediate layer features (i.e., latent representations). Unlike previous approaches, we propose a feature KD framework for training the student's backbone using feature-based losses \emph{exclusively} (i.e., without logit-based losses such as cross entropy). Leveraging recent discoveries about the geometry of latent representations, we introduce a \emph{knowledge quality metric} for identifying which teacher layers provide the most effective knowledge for distillation. Experiments on three image classification datasets with four diverse student-teacher pairs, spanning convolutional neural networks and vision transformers, demonstrate our KD method achieves state-of-the-art performance, delivering top-1 accuracy boosts of up to $15$% over standard approaches. We publicly share our code to facilitate future work at https://github.com/Thegolfingocto/KD_wo_CE.git.
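To make the core idea of feature-only distillation concrete, below is a minimal, illustrative sketch of a feature-matching loss in PyTorch: a student layer's activations are projected to the teacher's feature dimension and matched with an MSE term, with no cross-entropy or logit-based term. The class name `FeatureDistillationLoss`, the linear projector, and the MSE choice are assumptions for illustration only; they are not the paper's actual loss design or its knowledge quality metric (see the linked repository for the authors' implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillationLoss(nn.Module):
    """Illustrative feature-only distillation loss (assumed design, not the
    paper's exact method): match a student layer's features to a teacher
    layer's features through a learned linear projection. No logit-based
    (e.g., cross-entropy) term is used."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # Projector aligns the student feature dimension with the teacher's.
        self.projector = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_feat: torch.Tensor,
                teacher_feat: torch.Tensor) -> torch.Tensor:
        # Pool spatial dimensions if present: (B, C, H, W) -> (B, C).
        if student_feat.dim() > 2:
            student_feat = student_feat.flatten(2).mean(-1)
        if teacher_feat.dim() > 2:
            teacher_feat = teacher_feat.flatten(2).mean(-1)
        projected = self.projector(student_feat)
        # Teacher features are detached so gradients only update the
        # student backbone and the projector.
        return F.mse_loss(projected, teacher_feat.detach())
```

In such a setup, the loss would be applied between a chosen teacher layer (e.g., one selected by a layer-quality criterion) and a corresponding student layer, and a classification head could be fit on the distilled student backbone afterwards.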