Abstract
Multimodal interfaces are designed with a focus on flexibility, yet very few multimodal systems are currently capable of adapting to major sources of user or environmental variation. The development of adaptive multimodal processing techniques will require empirical guidance on modeling key aspects of individual differences. In the present study, we collected data from twenty-four 7-to-10-year-old children as they interacted using speech and pen input with an educational software prototype. A comprehensive analysis of children's multimodal integration patterns revealed that they were classifiable as either simultaneous or sequential integrators, although they integrated signals simultaneously more often than adults. During their sequential constructions, children's intermodal lags also were shorter than those of adult users. The high degree of consistency and early predictability of children's integration patterns were similar to previously reported adult data. These results have implications for the development of temporal thresholds and adaptive multimodal processing strategies for children's applications. The long-term goal of this research is life-span modeling of users' integration and synchronization patterns, which will be needed to design future high-performance adaptive multimodal systems.
Original language | English |
---|---|
Title of host publication | 7th International Conference on Spoken Language Processing, ICSLP 2002 |
Subtitle of host publication | Denver; United States; 16 September 2002 through 20 September 2002 |
Pages | 629-632 |
Number of pages | 4 |
Publication status | Published - 2002 |
Externally published | Yes |
Event | 7th International Conference on Spoken Language Processing, ICSLP 2002 - Denver, United States of America (16 Sep 2002 → 20 Sep 2002) |
Conference
Conference | 7th International Conference on Spoken Language Processing, ICSLP 2002 |
---|---|
Country | United States of America |
City | Denver |
Period | 16/09/02 → 20/09/02 |