Combining cross-modal knowledge transfer and semi-supervised learning for speech emotion recognition

Sheng Zhang, Min Chen, Jincai Chen, Yuan Fang Li, Yiling Wu, Minglei Li, Chuanbo Zhu

Research output: Contribution to journal › Article › Research › peer-review

24 Citations (Scopus)


Speech emotion recognition is an important task with a wide range of applications. However, its progress is limited by the lack of large, high-quality labeled speech datasets, owing to the high annotation cost and the inherent ambiguity of emotion labels. The recent emergence of large-scale video data makes it possible to obtain massive amounts of speech data, albeit unlabeled. To exploit such unlabeled data, previous works have explored semi-supervised learning methods on various tasks; however, noisy pseudo-labels remain a challenge for these methods. In this work, to alleviate this issue, we propose a new architecture that incorporates cross-modal knowledge transfer from the visual to the audio modality into a semi-supervised learning method with consistency regularization. We posit that introducing visual emotional knowledge through cross-modal transfer can increase the diversity and accuracy of pseudo-labels and improve the robustness of the model. To combine knowledge from cross-modal transfer and semi-supervised learning, we design two fusion algorithms, i.e., weighted fusion and consistent & random. Our experiments on the CH-SIMS and IEMOCAP datasets show that our method can effectively use additional unlabeled audio-visual data to outperform state-of-the-art results.
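Since the abstract only names the two fusion algorithms, the following is a minimal, hypothetical sketch of how pseudo-label fusion between the two sources might look, assuming the audio model and the visual teacher both emit softmax class probabilities. The function names, the weight alpha, and the random tie-breaking rule are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the two fusion strategies named in the abstract
# (weighted fusion, consistent & random); all details here are assumptions,
# not the paper's actual implementation.
import numpy as np

def weighted_fusion(p_audio, p_visual, alpha=0.5):
    """Blend audio and visual class-probability vectors with weight alpha."""
    return alpha * p_audio + (1.0 - alpha) * p_visual

def consistent_and_random(p_audio, p_visual, rng=None):
    """Keep the class when both modalities agree; otherwise pick one of the
    two predicted classes uniformly at random."""
    rng = rng or np.random.default_rng()
    a, v = int(np.argmax(p_audio)), int(np.argmax(p_visual))
    return a if a == v else int(rng.choice([a, v]))

# Example: a 4-class emotion problem where the two modalities disagree.
p_a = np.array([0.1, 0.6, 0.2, 0.1])   # audio model prediction
p_v = np.array([0.5, 0.3, 0.1, 0.1])   # visual teacher prediction
print(weighted_fusion(p_a, p_v))        # soft pseudo-label distribution
print(consistent_and_random(p_a, p_v))  # hard pseudo-label (class 0 or 1)
```

In a consistency-regularization setup, a fused pseudo-label like the one above would typically supervise the model's prediction on a perturbed view of the same unlabeled utterance.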

Original language: English
Article number: 107340
Number of pages: 10
Journal: Knowledge-Based Systems
Publication status: Published - 11 Oct 2021


Keywords:
  • Cross-modal knowledge transfer
  • Semi-supervised learning
  • Speech emotion recognition
