Recognizing emotions in spoken dialogue with acoustic and lexical cues

Leimin Tian, Johanna D. Moore, Catherine Lai

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other

2 Citations (Scopus)

Abstract

Emotions play a vital role in human communication. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to users' emotions. However, current automatic emotion recognizers perform poorly compared to humans. Our work attempts to improve the performance of emotion recognition in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge about emotions in dialogue, in either the feature representation or the model structure, is beneficial for automatic emotion recognition.
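The abstract and the LSTM/Multimodal keywords point to a hierarchical model that fuses acoustic and lexical cues. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of one plausible reading of such a hierarchy, with two modality-specific LSTM encoders whose utterance-level states are combined by a fusion stage. The feature dimensions, hidden size, and four-way emotion label set are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# per-modality LSTM encoders followed by a fusion classifier.
import torch
import torch.nn as nn

class HierarchicalEmotionClassifier(nn.Module):
    def __init__(self, acoustic_dim=88, lexical_dim=300, hidden_dim=64, num_emotions=4):
        super().__init__()
        # Modality-specific encoders: one LSTM per cue stream.
        self.acoustic_lstm = nn.LSTM(acoustic_dim, hidden_dim, batch_first=True)
        self.lexical_lstm = nn.LSTM(lexical_dim, hidden_dim, batch_first=True)
        # Fusion stage combines the final hidden states of both modalities.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_emotions),
        )

    def forward(self, acoustic_seq, lexical_seq):
        # acoustic_seq: (batch, frames, acoustic_dim); lexical_seq: (batch, words, lexical_dim)
        _, (h_acoustic, _) = self.acoustic_lstm(acoustic_seq)
        _, (h_lexical, _) = self.lexical_lstm(lexical_seq)
        fused = torch.cat([h_acoustic[-1], h_lexical[-1]], dim=-1)
        return self.fusion(fused)  # unnormalized per-utterance emotion scores

model = HierarchicalEmotionClassifier()
scores = model(torch.randn(8, 120, 88), torch.randn(8, 20, 300))  # toy batch
```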

Original language: English
Title of host publication: ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents
Subtitle of host publication: November 13, 2017, Glasgow, UK
Editors: Thierry Chaminade, Noël Nguyen, Magalie Ochs, Fabrice Lefèvre
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 45-46
Number of pages: 2
ISBN (electronic): 9781450355582
DOIs: https://doi.org/10.1145/3139491.3139497
Publication status: Published - 2017
Externally published: Yes
Event: ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, ISIAA 2017 - Glasgow, United Kingdom
Duration: 13 Nov 2017 → …
Conference number: 1st

Conference

Conference: ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, ISIAA 2017
Country: United Kingdom
City: Glasgow
Period: 13/11/17 → …

Keywords

  • Affective computing
  • Dialogue
  • Emotion
  • LSTM
  • Multimodal

Cite this

Tian, L., Moore, J. D., & Lai, C. (2017). Recognizing emotions in spoken dialogue with acoustic and lexical cues. In T. Chaminade, N. Nguyen, M. Ochs, & F. Lefèvre (Eds.), ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK (pp. 45-46). Association for Computing Machinery (ACM). https://doi.org/10.1145/3139491.3139497