TY - JOUR
T1 - Teacher–student complementary sample contrastive distillation
AU - Bao, Zhiqiang
AU - Huang, Zhenhua
AU - Gou, Jianping
AU - Du, Lan
AU - Liu, Kang
AU - Zhou, Jingtao
AU - Chen, Yunwen
N1 - Funding Information:
This work was supported by the National Natural Science Foundation of China (62172166, 61772366) and the Guangdong Basic and Applied Basic Research Foundation, China (2022A1515011380).
Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2024/2
Y1 - 2024/2
N2 - Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the “dark knowledge” of a large teacher model. However, previous studies have not adequately investigated the effectiveness of supervision from the teacher model, and overconfident predictions in the student model may degrade its performance. In this work, we propose a novel framework, Teacher–Student Complementary Sample Contrastive Distillation (TSCSCD), that alleviates these challenges. TSCSCD consists of three key components: Contrastive Sample Hardness (CSH), Supervision Signal Correction (SSC), and Student Self-Learning (SSL). Specifically, CSH evaluates the teacher's supervision for each sample by comparing the predictions of two compact models, one distilled from the teacher and the other trained from scratch. SSC corrects weak supervision according to CSH, while SSL employs integrated learning among multiple classifiers to regularize overconfident predictions. Extensive experiments on four real-world datasets demonstrate that TSCSCD outperforms recent state-of-the-art knowledge distillation techniques.
AB - Knowledge distillation (KD) is a widely adopted model compression technique for improving the performance of compact student models by utilizing the “dark knowledge” of a large teacher model. However, previous studies have not adequately investigated the effectiveness of supervision from the teacher model, and overconfident predictions in the student model may degrade its performance. In this work, we propose a novel framework, Teacher–Student Complementary Sample Contrastive Distillation (TSCSCD), that alleviates these challenges. TSCSCD consists of three key components: Contrastive Sample Hardness (CSH), Supervision Signal Correction (SSC), and Student Self-Learning (SSL). Specifically, CSH evaluates the teacher's supervision for each sample by comparing the predictions of two compact models, one distilled from the teacher and the other trained from scratch. SSC corrects weak supervision according to CSH, while SSL employs integrated learning among multiple classifiers to regularize overconfident predictions. Extensive experiments on four real-world datasets demonstrate that TSCSCD outperforms recent state-of-the-art knowledge distillation techniques.
KW - Deep learning
KW - Knowledge distillation
KW - Model regularization
KW - Sample hardness
KW - Transfer learning
UR - https://www.scopus.com/pages/publications/85177879496
U2 - 10.1016/j.neunet.2023.11.036
DO - 10.1016/j.neunet.2023.11.036
M3 - Article
C2 - 37989039
AN - SCOPUS:85177879496
SN - 0893-6080
VL - 170
SP - 176
EP - 189
JO - Neural Networks
JF - Neural Networks
ER -