Abstract
This paper presents an analysis of a previously recorded multimodal interaction dataset. The primary purpose of that dataset is to explore patterns in the focus of visual attention of humans under three different conditions: two humans involved in a task-based interaction with a robot; the same two humans involved in a task-based interaction where the robot is replaced by a third human; and a free three-party human interaction. The paper presents a data-driven methodology for the automatic visual identification of the active speaker based on facial action units (AUs). The paper also presents an evaluation of the proposed methodology on 12 different interactions with an approximate length of 4 hours. The methodology will be implemented on a robot and used to generate natural focus-of-visual-attention behavior during multi-party human-robot interactions.
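The record itself contains no technical detail beyond the abstract, so the sketch below is only a rough illustration of the kind of pipeline the abstract describes: per-frame AU intensity features for each visible face (e.g., as produced by a facial analysis tool such as OpenFace) fed to a generic speaking/not-speaking classifier, with the highest-scoring participant reported as the active speaker. The feature layout, classifier choice, and all names here are assumptions for illustration, not the method of the paper.

```python
# Illustrative sketch only (not the paper's method): classify per-frame AU
# intensity vectors into speaking / not-speaking, then pick the participant
# with the highest speaking probability as the active speaker.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in data: 17 AU intensities for 3 participants over 200 frames,
# plus a binary "is speaking" label per participant and frame.
n_frames, n_participants, n_aus = 200, 3, 17
aus = rng.random((n_frames, n_participants, n_aus))
labels = rng.integers(0, 2, size=(n_frames, n_participants))

# Train a per-face speaking/not-speaking classifier on flattened samples.
X = aus.reshape(-1, n_aus)
y = labels.reshape(-1)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def active_speaker(frame_aus: np.ndarray) -> int:
    """Return the index of the participant most likely to be speaking,
    given one AU intensity vector per participant for a single frame."""
    speaking_prob = clf.predict_proba(frame_aus)[:, 1]
    return int(np.argmax(speaking_prob))

print(active_speaker(aus[0]))
```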
| Original language | English |
|---|---|
| Title of host publication | 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction 2016 (ASSP4MI2016) |
| Editors | Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud |
| Place of Publication | New York, NY, USA |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 22-27 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781450345576 |
| DOIs | |
| Publication status | Published - 2016 |
| Externally published | Yes |
| Event | Workshop on Advancements in Social Signal Processing for Multimodal Interaction 2016 (2nd), Tokyo, Japan, 16 Nov 2016 → 16 Nov 2016. Proceedings: https://dl.acm.org/doi/proceedings/10.1145/3005467. Website: https://web.archive.org/web/20160804170758/https://wwwhome.ewi.utwente.nl/~truongkp/icmi2016-assp4mi |
Conference
| Conference | Workshop on Advancements in Social Signal Processing for Multimodal Interaction 2016 |
|---|---|
| Abbreviated title | ASSP4MI 2016 |
| Country | Japan |
| City | Tokyo |
| Period | 16/11/16 → 16/11/16 |
| Internet address | https://web.archive.org/web/20160804170758/https://wwwhome.ewi.utwente.nl/~truongkp/icmi2016-assp4mi |
Keywords
- Active speaker identification
- Human-robot interaction
- Multi-modal interaction