Public speaking training with a multimodal interactive virtual audience framework - demonstration

Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other › peer-review

11 Citations (Scopus)

Abstract

We have developed an interactive virtual audience platform for public speaking training. Users' public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows us to compare different interaction media (e.g. virtual reality vs. normal interaction), social situations (e.g. one-on-one meetings vs. large audiences), and trained behaviors (e.g. general public speaking performance vs. specific behaviors).
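As a rough illustration of the loop the abstract describes (sense the speaker's behavior, analyze it, and map the result to audience feedback), the following is a minimal hypothetical Python sketch. The feature names, thresholds, and feedback channels are assumptions for illustration only and do not come from the paper.

from dataclasses import dataclass

# Hypothetical feature summary for one analysis window of the speaker's
# behavior; field names are illustrative, not taken from the paper.
@dataclass
class BehaviorFeatures:
    speech_rate_wpm: float    # speaking rate estimated from the audio channel
    gaze_at_audience: float   # fraction of the window spent facing the audience
    gesture_energy: float     # aggregate body-movement energy, normalized to [0, 1]

def audience_feedback(features: BehaviorFeatures) -> dict:
    """Map analyzed behavior to the two output channels the abstract
    mentions: virtual characters and generic visual widgets.
    The threshold rules below are placeholder assumptions."""
    engaged = (features.gaze_at_audience > 0.6
               and 100 <= features.speech_rate_wpm <= 170)
    return {
        "character_cue": "nod" if engaged else "look_away",  # nonverbal audience reaction
        "widget_gauge": min(1.0, features.gesture_energy),   # value for a visual feedback gauge
    }

if __name__ == "__main__":
    window = BehaviorFeatures(speech_rate_wpm=140, gaze_at_audience=0.8, gesture_energy=0.4)
    print(audience_feedback(window))  # {'character_cue': 'nod', 'widget_gauge': 0.4}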

Original language: English
Title of host publication: Proceedings of the 2015 ACM International Conference on Multimodal Interaction
Editors: Dan Bohus, Radu Horaud, Helen Meng
Place of Publication: New York NY USA
Publisher: Association for Computing Machinery (ACM)
Pages: 367-368
Number of pages: 2
ISBN (Electronic): 9781450339124
DOIs
Publication status: Published - 2015
Externally published: Yes
Event: International Conference on Multimodal Interfaces 2015 - Seattle, United States of America
Duration: 9 Nov 2015 - 13 Nov 2015
Conference number: 17th
https://icmi.acm.org/2015/
https://dl.acm.org/doi/proceedings/10.1145/2818346 (Proceedings)

Conference

Conference: International Conference on Multimodal Interfaces 2015
Abbreviated title: ICMI 2015
Country: United States of America
City: Seattle
Period: 9/11/15 - 13/11/15
Internet address: https://icmi.acm.org/2015/

Keywords

  • Automatic behavior recognition
  • Public speaking training
  • Virtual audience