Recognizing emotions in spoken dialogue with acoustic and lexical cues

Leimin Tian, Johanna D. Moore, Catherine Lai

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other

Abstract

Emotions play a vital role in human communication. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to the user's emotions. However, current automatic emotion recognizers perform poorly compared to humans. Our work attempts to improve the performance of emotion recognition in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge about emotions in dialogue, in either the feature representation or the model structure, is beneficial for automatic emotion recognition.
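The abstract does not describe the model architecture; the keywords (LSTM, multimodal) suggest fusing acoustic and lexical cues with recurrent encoders. As a loose illustration only, not the paper's actual method, the sketch below encodes each modality with its own small NumPy LSTM and fuses the final hidden states for utterance-level classification. All dimensions (13 acoustic features per frame, 50-d word embeddings, 4 emotion classes), the random weights, and the late-fusion design are assumptions made for this example.

```python
# Illustrative sketch of bimodal LSTM fusion (NOT the paper's model):
# acoustic and lexical feature sequences are encoded separately, and
# their final hidden states are concatenated for classification.
import numpy as np

rng = np.random.default_rng(0)

def lstm_params(n_in, n_h):
    # One stacked weight matrix for all four gates (input, forget, cell, output).
    return (rng.standard_normal((4 * n_h, n_in + n_h)) * 0.1,
            np.zeros(4 * n_h))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(xs, params, n_h):
    """Run an LSTM over a sequence; return the final hidden state."""
    W, b = params
    h = np.zeros(n_h)
    c = np.zeros(n_h)
    for x in xs:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

n_h, n_emotions = 8, 4
acoustic_lstm = lstm_params(13, n_h)  # assumed: 13 acoustic features per frame
lexical_lstm = lstm_params(50, n_h)   # assumed: 50-d word embeddings
W_out = rng.standard_normal((n_emotions, 2 * n_h)) * 0.1

def classify(acoustic_seq, lexical_seq):
    h_a = lstm_encode(acoustic_seq, acoustic_lstm, n_h)
    h_l = lstm_encode(lexical_seq, lexical_lstm, n_h)
    fused = np.concatenate([h_a, h_l])   # late fusion of the two modalities
    logits = W_out @ fused
    p = np.exp(logits - logits.max())
    return p / p.sum()                   # emotion class probabilities

probs = classify(rng.standard_normal((20, 13)),  # 20 audio frames
                 rng.standard_normal((6, 50)))   # 6 words
```

With trained weights, `probs` would give a distribution over emotion classes for one utterance; here the untrained weights only demonstrate the data flow through the hierarchy.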

Original language: English
Title of host publication: ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents
Subtitle of host publication: November 13, 2017, Glasgow, UK
Editors: Thierry Chaminade, Noël Ngyuen, Magalie Ochs, Fabrice Lefèvre
Place of Publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 45-46
Number of pages: 2
ISBN (Electronic): 9781450355582
DOIs: 10.1145/3139491.3139497
Publication status: Published - 2017
Externally published: Yes
Event: 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, ISIAA 2017 - Glasgow, United Kingdom
Duration: 13 Nov 2017 → …

Conference

Conference: 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, ISIAA 2017
Country: United Kingdom
City: Glasgow
Period: 13/11/17 → …

Keywords

  • Affective computing
  • Dialogue
  • Emotion
  • LSTM
  • Multimodal

Cite this

Tian, L., Moore, J. D., & Lai, C. (2017). Recognizing emotions in spoken dialogue with acoustic and lexical cues. In T. Chaminade, N. Ngyuen, M. Ochs, & F. Lefèvre (Eds.), ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK (pp. 45-46). New York, NY, USA: Association for Computing Machinery (ACM). https://doi.org/10.1145/3139491.3139497
Tian, Leimin ; Moore, Johanna D. ; Lai, Catherine. / Recognizing emotions in spoken dialogue with acoustic and lexical cues. ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK. editor / Thierry Chaminade ; Noël Ngyuen ; Magalie Ochs ; Fabrice Lefèvre. New York, NY, USA: Association for Computing Machinery (ACM), 2017. pp. 45-46
@inproceedings{a244e34c82714dd6a0c571892cb3799b,
title = "Recognizing emotions in spoken dialogue with acoustic and lexical cues",
abstract = "Emotions play a vital role in human communications. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to user's emotions. However, current automatic emotion recognizers have limited performance compared to humans. Our work attempts to improve performance of recognizing emotions in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge on emotions in dialogue in either the feature representation or the model structure is beneficial for automatic emotion recognition.",
keywords = "Affective computing, Dialogue, Emotion, LSTM, Multimodal",
author = "Leimin Tian and Moore, {Johanna D.} and Catherine Lai",
year = "2017",
doi = "10.1145/3139491.3139497",
language = "English",
pages = "45--46",
editor = "Chaminade, {Thierry } and Ngyuen, {No{\"e}l } and Ochs, {Magalie } and Lef{\`e}vre, {Fabrice }",
booktitle = "ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents",
publisher = "Association for Computing Machinery (ACM)",
address = "United States of America",

}

Tian, L, Moore, JD & Lai, C 2017, Recognizing emotions in spoken dialogue with acoustic and lexical cues. in T Chaminade, N Ngyuen, M Ochs & F Lefèvre (eds), ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK. Association for Computing Machinery (ACM), New York, NY, USA, pp. 45-46, 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, ISIAA 2017, Glasgow, United Kingdom, 13/11/17. https://doi.org/10.1145/3139491.3139497

Recognizing emotions in spoken dialogue with acoustic and lexical cues. / Tian, Leimin; Moore, Johanna D.; Lai, Catherine.

ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK. ed. / Thierry Chaminade; Noël Ngyuen; Magalie Ochs; Fabrice Lefèvre. New York, NY, USA: Association for Computing Machinery (ACM), 2017. p. 45-46.


TY - GEN

T1 - Recognizing emotions in spoken dialogue with acoustic and lexical cues

AU - Tian, Leimin

AU - Moore, Johanna D.

AU - Lai, Catherine

PY - 2017

Y1 - 2017

N2 - Emotions play a vital role in human communications. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to user's emotions. However, current automatic emotion recognizers have limited performance compared to humans. Our work attempts to improve performance of recognizing emotions in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge on emotions in dialogue in either the feature representation or the model structure is beneficial for automatic emotion recognition.

AB - Emotions play a vital role in human communications. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to user's emotions. However, current automatic emotion recognizers have limited performance compared to humans. Our work attempts to improve performance of recognizing emotions in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge on emotions in dialogue in either the feature representation or the model structure is beneficial for automatic emotion recognition.

KW - Affective computing

KW - Dialogue

KW - Emotion

KW - LSTM

KW - Multimodal

UR - http://www.scopus.com/inward/record.url?scp=85041210640&partnerID=8YFLogxK

U2 - 10.1145/3139491.3139497

DO - 10.1145/3139491.3139497

M3 - Conference Paper

SP - 45

EP - 46

BT - ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents

A2 - Chaminade, Thierry

A2 - Ngyuen, Noël

A2 - Ochs, Magalie

A2 - Lefèvre, Fabrice

PB - Association for Computing Machinery (ACM)

CY - New York NY USA

ER -

Tian L, Moore JD, Lai C. Recognizing emotions in spoken dialogue with acoustic and lexical cues. In Chaminade T, Ngyuen N, Ochs M, Lefèvre F, editors, ISIAA'17 - Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents: November 13, 2017, Glasgow, UK. New York, NY, USA: Association for Computing Machinery (ACM). 2017. p. 45-46 https://doi.org/10.1145/3139491.3139497