Emotion Recognition In The Wild Challenge 2013

Abhinav Dhall, Roland Goecke, Jyoti Joshi, Michael Wagner, Tom Gedeon

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

104 Citations (Scopus)

Abstract

Emotion recognition is a very active field of research. The Emotion Recognition In The Wild Challenge and Workshop (EmotiW) 2013 Grand Challenge consists of an audio-video-based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such laboratory-controlled data poorly represents the environment and conditions faced in real-world situations. The goal of this Grand Challenge is to define a common platform for the evaluation of emotion recognition methods in real-world conditions. The database used in the 2013 challenge is the Acted Facial Expression in the Wild (AFEW) database, which has been collected from movies depicting close-to-real-world conditions.
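For readers unfamiliar with the task format, the sketch below illustrates the kind of per-clip classification pipeline a challenge like this evaluates: each video clip is mapped to one discrete emotion label, and methods are compared by classification accuracy on a held-out split. This is a minimal illustration only; the stubbed random features, the file names, and the scikit-learn SVM are assumptions for demonstration, not the challenge's official baseline or protocol.

```python
# Hypothetical sketch of an EmotiW-style per-clip emotion classification task.
# Feature extraction is stubbed with random vectors; the seven labels follow
# AFEW's discrete emotion categories, but everything else is illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]
rng = np.random.default_rng(0)

def extract_clip_features(clip_path: str, dim: int = 256) -> np.ndarray:
    """Placeholder for real audio-video descriptors (e.g. appearance and
    acoustic features pooled over the clip) as a fixed-length vector."""
    return rng.standard_normal(dim)

# Toy train/validation splits standing in for the challenge partitions.
train_clips = [f"train/clip_{i:03d}.avi" for i in range(70)]
train_labels = [EMOTIONS[i % len(EMOTIONS)] for i in range(70)]
val_clips = [f"val/clip_{i:03d}.avi" for i in range(21)]
val_labels = [EMOTIONS[i % len(EMOTIONS)] for i in range(21)]

X_train = np.stack([extract_clip_features(c) for c in train_clips])
X_val = np.stack([extract_clip_features(c) for c in val_clips])

# Train a simple classifier on per-clip features and score the validation set.
clf = SVC(kernel="rbf").fit(X_train, train_labels)
print("Validation accuracy:", accuracy_score(val_labels, clf.predict(X_val)))
```

With random features the accuracy hovers around chance (1/7); a real entry would replace the stub with features extracted from the clips' video frames and audio track.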

Original language: English
Title of host publication: ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction
Pages: 509-515
Number of pages: 7
DOIs: https://doi.org/10.1145/2522848.2531739
Publication status: Published - 1 Dec 2013
Externally published: Yes
Event: ACM International Conference on Multimodal Interaction, ICMI 2013 - Sydney, NSW, Australia
Duration: 9 Dec 2013 - 13 Dec 2013
Conference number: 15th

Publication series

Name: ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction

Conference

Conference: ACM International Conference on Multimodal Interaction, ICMI 2013
Country: Australia
City: Sydney, NSW
Period: 9/12/13 - 13/12/13

Keywords

  • emotion recognition in the wild
  • multimodal

Cite this

Dhall, A., Goecke, R., Joshi, J., Wagner, M., & Gedeon, T. (2013). Emotion Recognition In The Wild Challenge 2013. In ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction (pp. 509-515). (ICMI 2013 - Proceedings of the 2013 ACM International Conference on Multimodal Interaction). https://doi.org/10.1145/2522848.2531739