Feature fusion-based collaborative learning for knowledge distillation

Yiting Li, Liyuan Sun, Jianping Gou, Lan Du, Weihua Ou

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks have achieved great success in a variety of applications, such as self-driving cars and intelligent robotics. Meanwhile, knowledge distillation has received increasing attention as an effective model compression technique for training very efficient deep models. The performance of the student network obtained through knowledge distillation depends heavily on whether the transfer of the teacher's knowledge can effectively guide the student's training. However, most existing knowledge distillation schemes require a large teacher network pre-trained on large-scale data sets, which can make knowledge distillation harder to apply across different applications. In this article, we propose a feature fusion-based collaborative learning method for knowledge distillation. Specifically, during knowledge distillation, it enables the networks to learn from each other using both feature-based and response-based knowledge from different network layers. We concatenate the features learned by the teacher and student networks to obtain a more representative feature map for knowledge transfer. In addition, we introduce a network regularization method that further improves model performance by providing positive knowledge during training. Experiments and ablation studies on two widely used data sets demonstrate that the proposed feature fusion-based collaborative learning method significantly outperforms recent state-of-the-art knowledge distillation methods.
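The abstract describes two mechanisms: fusing teacher and student feature maps by channel-wise concatenation, and a response-based distillation signal on the network outputs. The sketch below illustrates both in PyTorch under stated assumptions; it is not the authors' implementation. The FusionKD module and its 1x1 projection are hypothetical design choices, and the loss shown is the standard soft-target distillation objective (Hinton et al.) rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionKD(nn.Module):
    """Sketch of feature fusion for distillation: concatenate teacher and
    student feature maps along the channel axis, then project back to the
    student's channel width with a 1x1 convolution (hypothetical choice)."""

    def __init__(self, teacher_channels: int, student_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(teacher_channels + student_channels,
                              student_channels, kernel_size=1)

    def forward(self, f_teacher: torch.Tensor, f_student: torch.Tensor):
        # Assumes the maps come from comparable stages; resize the teacher
        # map if its spatial size differs from the student's.
        if f_teacher.shape[-2:] != f_student.shape[-2:]:
            f_teacher = F.interpolate(f_teacher, size=f_student.shape[-2:])
        fused = torch.cat([f_teacher, f_student], dim=1)  # channel concat
        return self.proj(fused)

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.9):
    """Standard response-based KD loss: temperature-softened KL divergence
    against the teacher plus hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```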

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: International Journal of Distributed Sensor Networks
Volume: 17
Issue number: 11
Publication status: Published - 1 Nov 2021

Keywords

  • collaborative learning
  • deep learning
  • feature fusion
  • knowledge distillation
  • model compression
