Multimodal multiparty social interaction with the Furhat head

Samer Al Moubayed, Gabriel Skantze, Jonas Beskow, Kalin Stefanov, Joakim Gustafson

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other › peer-review

5 Citations (Scopus)

Abstract

This demonstrator presents an advanced multimodal and multiparty spoken conversational system using Furhat, a robot head based on projected facial animation. Furhat is a human-like interface that uses back-projection to render animated faces on a physical robot head. In the system, multimodality is enabled through speech and rich visual input signals, such as multi-person real-time face tracking and microphone tracking. The demonstrator showcases a system able to carry out social dialogue with multiple interlocutors simultaneously, using rich output signals such as eye and head coordination, lip-synchronized speech synthesis, and non-verbal facial gestures to regulate fluent and expressive multiparty conversations.

Original language: English
Title of host publication: Proceedings of the 14th ACM International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery (ACM)
Pages: 293-294
Number of pages: 2
ISBN (Print): 9781450314671
Publication status: Published - 2012
Externally published: Yes
Event: International Conference on Multimodal Interfaces 2012 - Santa Monica, United States of America
Duration: 22 Oct 2012 - 26 Oct 2012
Conference number: 14th
https://dl.acm.org/doi/proceedings/10.1145/2388676 (Proceedings)

Publication series

Name: ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction

Conference

Conference: International Conference on Multimodal Interfaces 2012
Abbreviated title: ICMI 2012
Country: United States of America
City: Santa Monica
Period: 22/10/12 - 26/10/12

Keywords

  • Facial animation
  • Furhat
  • Gaze
  • Gesture
  • Microphone tracking
  • Multimodal systems
  • Multiparty interaction
  • Robot head
  • Speech
  • Spoken dialog
