Multimodal redundancy across handwriting and speech during computer mediated human-human interactions

Edward C. Kaiser, Paulo Barthelmess, Candice Erdmann, Phil Cohen

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

9 Citations (Scopus)


Lecturers, presenters and meeting participants often say what they publicly handwrite. In this paper, we report on three empirical explorations of such multimodal redundancy - during whiteboard presentations, during a spontaneous brainstorming meeting, and during the informal annotation and discussion of photographs. We show that redundantly presented words, compared to other words used during a presentation or meeting, tend to be topic specific and thus are likely to be out-of-vocabulary. We also show that they have significantly higher tf-idf (term frequency-inverse document frequency) weights than other words, which we argue supports the hypothesis that they are dialogue-critical words. We frame the import of these empirical findings by describing SHACER, our recently introduced Speech and HAndwriting reCognizER, which can combine information from instances of redundant handwriting and speech to dynamically learn new vocabulary.
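The tf-idf weighting mentioned above can be sketched as follows. This is a minimal illustration of the standard measure, not the paper's actual implementation; the toy corpus and word choices are hypothetical.

```python
import math

def tf_idf(term, doc, corpus):
    """Term frequency-inverse document frequency for one term in one document.

    tf  = relative frequency of the term in the document
    idf = log(corpus size / number of documents containing the term)
    """
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    return tf * math.log(len(corpus) / df)

# Hypothetical corpus: three "documents" as token lists.
corpus = [
    ["the", "meeting", "covered", "shacer", "shacer", "design"],
    ["the", "photos", "were", "annotated"],
    ["the", "brainstorming", "meeting", "notes"],
]

doc = corpus[0]
# A topic-specific word scores higher than a common function word,
# which is the sense in which redundant words carry high tf-idf weight.
print(tf_idf("shacer", doc, corpus) > tf_idf("the", doc, corpus))  # True
```

Because "shacer" appears in only one document, its idf is log(3) while "the" appears everywhere and gets idf of zero, so the topic-specific term dominates.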

Original language: English
Title of host publication: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2007, CHI 2007
Publisher: Association for Computing Machinery (ACM)
Number of pages: 10
ISBN (Print): 1595935932, 9781595935939
Publication status: Published - 22 Oct 2007
Externally published: Yes
Event: International Conference on Human Factors in Computing Systems 2007 - San Jose, United States of America
Duration: 28 Apr 2007 – 3 May 2007
Conference number: 25th

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings


Conference: International Conference on Human Factors in Computing Systems 2007
Abbreviated title: CHI 2007
Country/Territory: United States of America
City: San Jose


Keywords

  • Handwriting
  • Multimodal
  • Speech
