Public speaking training with a multimodal interactive virtual audience framework - demonstration

Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Other › peer-review

15 Citations (Scopus)


We have developed an interactive virtual audience platform for public speaking training. Users' public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system makes it possible to compare different interaction media (e.g., virtual reality vs. normal interaction), social situations (e.g., one-on-one meetings vs. large audiences), and trained behaviors (e.g., general public speaking performance vs. specific behaviors).
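The sense-then-respond loop described above can be illustrated with a minimal sketch. The feature names (`gaze_at_audience`, `vocal_energy`), thresholds, and reaction labels below are hypothetical placeholders, not the actual sensor set or feedback policy of the platform; the sketch only shows the general idea of mapping sensed behavior to both a virtual-character reaction and a generic widget value.

```python
from dataclasses import dataclass

@dataclass
class BehaviorSample:
    # Hypothetical multimodal features, each normalized to [0, 1];
    # the real system's feature set is not specified here.
    gaze_at_audience: float  # fraction of time spent looking at the audience
    vocal_energy: float      # normalized vocal loudness

def audience_feedback(sample: BehaviorSample) -> dict:
    """Map a sensed behavior sample to two feedback channels:
    a nonverbal virtual-character reaction and a generic gauge value."""
    # Illustrative scoring: equal weighting of the two example features.
    score = 0.5 * sample.gaze_at_audience + 0.5 * sample.vocal_energy
    if score >= 0.6:
        reaction = "nod"        # audience appears engaged
    elif score >= 0.4:
        reaction = "neutral"
    else:
        reaction = "look_away"  # audience appears disengaged
    return {"gauge": round(score, 2), "character_reaction": reaction}

print(audience_feedback(BehaviorSample(gaze_at_audience=0.9, vocal_energy=0.7)))
```

Because both the widget value and the character reaction derive from the same behavior score, the same pipeline can drive either a one-on-one character or a large audience, which is the kind of flexibility the abstract refers to.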

Original language: English
Title of host publication: Proceedings of the 2015 ACM International Conference on Multimodal Interaction
Editors: Dan Bohus, Radu Horaud, Helen Meng
Place of publication: New York NY USA
Publisher: Association for Computing Machinery (ACM)
Number of pages: 2
ISBN (Electronic): 9781450339124
Publication status: Published - 2015
Externally published: Yes
Event: International Conference on Multimodal Interfaces 2015 - Seattle, United States of America
Duration: 9 Nov 2015 to 13 Nov 2015
Conference number: 17th


Conference: International Conference on Multimodal Interfaces 2015
Abbreviated title: ICMI 2015
Country/Territory: United States of America


Keywords:

  • Automatic behavior recognition
  • Public speaking training
  • Virtual audience
