Abstract
To set the stage for this multidisciplinary discussion among experts on the challenging topic of learning with multimodal technology, we ask some basic questions:
• What have neuroscience, cognitive and learning sciences, and human-computer interaction findings taught us about how humans learn?
• What are the implications for designing more effective educational technologies, in particular ones that leverage emerging multimodal-multisensor capabilities?
Computer technologies are becoming an increasingly influential set of tools in a student's classroom experience. However, large-scale assessments conducted on laptop initiatives and other technology programs have not always yielded significantly improved student performance on standardized achievement tests or cognitive measures. The purpose of this chapter is to discuss what role multimodal-multisensor technologies could potentially play in improving support for learning and education. The expert neuroscience, cognitive and learning sciences, and computer science discussants involved in this exchange consider what we know about how multisensory-multimodal learning occurs, and the implications for how we could develop multimodal-multisensor interfaces that stimulate thinking and learning more effectively than past computer interfaces.
| Original language | English |
|---|---|
| Title of host publication | The Handbook of Multimodal-Multisensor Interfaces, Volume 1 |
| Subtitle of host publication | Foundations, User Modeling, and Common Modality Combinations |
| Editors | Sharon Oviatt, Björn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Krüger |
| Place of publication | New York, NY, USA |
| Publisher | Association for Computing Machinery (ACM) |
| Chapter | 13 |
| Pages | 547-570 |
| Number of pages | 24 |
| ISBN (Electronic) | 9781970001655, 9781970001662 |
| ISBN (Print) | 9781970001679, 9781970001648 |
| Publication status | Published - 2017 |
| Externally published | Yes |