Driver Emotion Recognition With a Hybrid Attentional Multimodal Fusion Framework

Luntian Mou, Yiyuan Zhao, Chao Zhou, Bahareh Nakisa, Mohammad Naim Rastgoo, Lei Ma, Tiejun Huang, Baocai Yin, Ramesh Jain, Wen Gao

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Negative emotions can induce dangerous driving behaviors that lead to serious traffic accidents. It is therefore necessary to build a system that automatically recognizes driver emotions so that corrective actions can be taken to avoid accidents. Existing studies on driver emotion recognition have mainly relied on facial and physiological data; fewer have exploited multimodal data that capture the contextual characteristics of driving. Moreover, fully fusing multimodal data at the feature-fusion layer to improve recognition performance remains a challenge. To this end, we propose a novel multimodal fusion framework that combines a convolutional long short-term memory (ConvLSTM) network with a hybrid attention mechanism to fuse non-invasive multimodal data from the eye, the vehicle, and the environment. To verify its effectiveness, we conducted extensive experiments on a dataset collected with an advanced driving simulator; the results demonstrate the effectiveness of the proposed method. Finally, we present a preliminary exploration of the correlation between driver emotion and stress.
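The abstract does not specify the architecture's details, so the following is only a minimal sketch, in PyTorch, of the kind of pipeline it describes: per-modality encoders (a hand-rolled ConvLSTM cell for image-like eye data, plain LSTMs for vehicle and environment signals) whose embeddings are fused by a learned attention over modalities before classification. All layer names, sizes, and input shapes are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of ConvLSTM + attention-based multimodal fusion.
# Sizes, names, and modality formats are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: LSTM gates computed with 2-D convolutions."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class AttentionFusionNet(nn.Module):
    def __init__(self, eye_ch=1, hid=32, vehicle_dim=8, env_dim=8, n_classes=5):
        super().__init__()
        self.eye_cell = ConvLSTMCell(eye_ch, hid)
        self.vehicle_lstm = nn.LSTM(vehicle_dim, hid, batch_first=True)
        self.env_lstm = nn.LSTM(env_dim, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)        # scores each modality embedding
        self.classifier = nn.Linear(hid, n_classes)

    def forward(self, eye_seq, vehicle_seq, env_seq):
        # eye_seq: (B, T, C, H, W); vehicle_seq / env_seq: (B, T, D)
        B, T, C, H, W = eye_seq.shape
        h = eye_seq.new_zeros(B, self.eye_cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        for t in range(T):                   # unroll the ConvLSTM over time
            h, c = self.eye_cell(eye_seq[:, t], (h, c))
        eye_emb = h.mean(dim=(2, 3))         # global average pool -> (B, hid)
        veh_emb = self.vehicle_lstm(vehicle_seq)[0][:, -1]  # last time step
        env_emb = self.env_lstm(env_seq)[0][:, -1]
        stack = torch.stack([eye_emb, veh_emb, env_emb], dim=1)  # (B, 3, hid)
        w = F.softmax(self.attn(stack), dim=1)   # attention weights over modalities
        fused = (w * stack).sum(dim=1)           # attention-weighted fusion
        return self.classifier(fused)


# Usage with dummy data: 2 samples, 10 time steps.
model = AttentionFusionNet()
logits = model(torch.randn(2, 10, 1, 32, 32),   # eye image sequence
               torch.randn(2, 10, 8),           # vehicle signals
               torch.randn(2, 10, 8))           # environment signals
print(logits.shape)  # torch.Size([2, 5])
```

The attention over modality embeddings is one simple way to realize feature-level fusion; the paper's "hybrid" mechanism presumably combines more than one attention type, which this sketch does not attempt to reproduce.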

Original language: English
Pages (from-to): 2970-2981
Number of pages: 12
Journal: IEEE Transactions on Affective Computing
Volume: 14
Issue number: 4
DOIs
Publication status: Published - 1 Oct 2023
Externally published: Yes

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Attention mechanism
  • convolutional long short-term memory
  • driver emotion recognition
  • driver stress
  • multimodal fusion
