Gesture and Gaze: Multimodal Data in Dyadic Interactions

Bertrand Schneider, Marcelo Worsley, Roberto Martinez-Maldonado

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review

Abstract

With the advent of new and affordable sensing technologies, CSCL researchers can automatically capture collaborative interactions with unprecedented accuracy. This development opens new opportunities and challenges for the field. In this chapter, we describe empirical studies and theoretical frameworks that leverage multimodal sensors to study dyadic interactions. More specifically, we focus on gaze and gesture sensing and on how these measures can be associated with constructs such as learning, interaction, and collaboration strategies in co-located settings. We briefly describe the history of multimodal analytics methodologies in CSCL, the state of the art of this research area, and how data fusion and human-centered techniques are needed to give meaning to multimodal data when studying collaborative learning groups. We conclude by discussing the future of these developments and their implications for CSCL researchers.
Original language: English
Title of host publication: International Handbook of Computer-Supported Collaborative Learning
Editors: Ulrike Cress, Carolyn Rosé, Alyssa Friend Wise, Jun Oshima
Place of publication: Cham, Switzerland
Publisher: Springer
Pages: 625-641
Number of pages: 17
ISBN (Electronic): 9783030652906
ISBN (Print): 9783030652913
DOIs
Publication status: Published - 2021

Publication series

Name: Computer-Supported Collaborative Learning Series
Publisher: Springer Nature Switzerland AG
Volume: 19
ISSN (Print): 1573-4552
ISSN (Electronic): 2543-0157
