Abstract
This demonstrator presents an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that uses back-projection to render an animated face on a physical robot head. The system combines speech with rich visual input signals, including multi-person real-time face tracking and microphone tracking. The demonstrator showcases a system that can carry out social dialogue with multiple interlocutors simultaneously, using rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures to regulate fluent and expressive multiparty conversation.
Original language | English
---|---
Title of host publication | Proceedings of the 14th ACM International Conference on Multimodal Interaction
Publisher | Association for Computing Machinery (ACM)
Pages | 293-294
Number of pages | 2
ISBN (Print) | 9781450314671
Publication status | Published - 2012
Externally published | Yes
Event | International Conference on Multimodal Interfaces 2012 (14th), Santa Monica, United States of America, 22 Oct 2012 → 26 Oct 2012. Proceedings: https://dl.acm.org/doi/proceedings/10.1145/2388676
Publication series

Name | ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction
---|---
Conference

Conference | International Conference on Multimodal Interfaces 2012
---|---
Abbreviated title | ICMI 2012
Country/Territory | United States of America
City | Santa Monica
Period | 22/10/12 → 26/10/12
Keywords
- Facial animation
- Furhat
- Gaze
- Gesture
- Microphone tracking
- Multimodal systems
- Multiparty interaction
- Robot head
- Speech
- Spoken dialog