Facial Motion Prior Networks for facial expression recognition

Yuedong Chen, Jianfeng Wang, Shikai Chen, Zhongchao Shi, Jianfei Cai

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

2 Citations (Scopus)

Abstract

Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most existing deep learning based FER methods do not consider domain knowledge well and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we propose to incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
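
The abstract only outlines the two-branch idea (a mask branch that highlights facial muscle moving regions, supervised by average neutral-vs-expressive differences, feeding a classification branch). The sketch below is a minimal illustration of that idea in PyTorch; the framework choice, the module names (FacialMotionMaskNet, FMPNSketch, training_losses), the way the mask modulates the input face, and the unweighted loss sum are all assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch (assumed PyTorch, hypothetical module names) of the idea in the
# abstract: a mask branch predicts a facial-motion mask, supervised by a prior
# mask (per-class average difference between expressive and neutral faces), and
# the mask-emphasized face is classified by a second branch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FacialMotionMaskNet(nn.Module):
    """Hypothetical mask branch: grayscale face -> single-channel motion mask in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, gray_face):
        return torch.sigmoid(self.head(self.encoder(gray_face)))


class FMPNSketch(nn.Module):
    """Two-branch sketch: the predicted mask emphasizes moving regions before classification."""
    def __init__(self, num_classes=6):  # e.g. six basic expressions (assumption)
        super().__init__()
        self.mask_net = FacialMotionMaskNet()
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, rgb_face, gray_face):
        mask = self.mask_net(gray_face)           # predicted facial-motion mask
        focused = rgb_face * mask + rgb_face      # emphasize masked regions, keep original face
        return self.classifier(focused), mask


def training_losses(logits, labels, pred_mask, prior_mask):
    """Classification loss plus a mask loss against the prior mask
    (average |expressive - neutral| difference for the ground-truth class)."""
    cls_loss = F.cross_entropy(logits, labels)
    mask_loss = F.mse_loss(pred_mask, prior_mask)
    return cls_loss + mask_loss  # simplified unweighted sum
```
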

Original language: English
Title of host publication: 2019 IEEE International Conference on Visual Communications and Image Processing (VCIP 2019)
Editors: Mark Pickering, Qiang Wu, Lei Wang, Jiaying Liu
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 4
ISBN (Electronic): 9781728137230
ISBN (Print): 9781728137247
DOIs
Publication status: Published - 2019
Event: IEEE Visual Communications and Image Processing 2019 - Sydney, Australia
Duration: 1 Dec 2019 - 4 Dec 2019
Conference number: 34th
http://www.vcip2019.org/

Conference

Conference: IEEE Visual Communications and Image Processing 2019
Abbreviated title: VCIP 2019
Country: Australia
City: Sydney
Period: 1/12/19 - 4/12/19
Internet address: http://www.vcip2019.org/

Keywords

  • deep learning
  • facial expression recognition
  • facial-motion mask
  • prior knowledge
