Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences

Benfang Xiao, Rebecca Lunsford, Rachel Coulston, Matt Wesson, Sharon Oviatt

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

36 Citations (Scopus)


Multimodal interfaces are designed with a focus on flexibility, although very few are currently capable of adapting to major sources of user, task, or environmental variation. The development of adaptive multimodal processing techniques will require empirical guidance from quantitative modeling of key aspects of individual differences, especially as users engage in different types of tasks in different usage contexts. In the present study, data were collected from fifteen 66- to 86-year-old healthy seniors as they interacted with a map-based flood management system using multimodal speech and pen input. A comprehensive analysis of multimodal integration patterns revealed that seniors were classifiable as either simultaneous or sequential integrators, like children and adults. Seniors also demonstrated early predictability and a high degree of consistency in their dominant integration pattern. However, greater individual differences in multimodal integration generally were evident in this population. Perhaps surprisingly, during sequential constructions seniors' intermodal lags were no longer, in either average or maximum duration, than those of younger adults, although both of these groups had longer maximum lags than children. However, an analysis of seniors' performance did reveal lengthy latencies before initiating a task, and high rates of self-talk and task-critical errors while completing spatial tasks. All of these behaviors were magnified as the task difficulty level increased. Results of this research have implications for the design of adaptive processing strategies appropriate for seniors' applications, especially for the development of temporal thresholds used during multimodal fusion. The long-term goal of this research is the design of high-performance multimodal systems that adapt to a full spectrum of diverse users, supporting tailored and robust future systems.
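The classification of users as simultaneous or sequential integrators described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the event timestamps, the overlap criterion, and the function names are illustrative assumptions. The idea is that a fusion engine could observe whether a user's speech and pen signals overlap in time, derive a dominant integration pattern early in the session, and then set its temporal fusion thresholds accordingly.

```python
# Hypothetical sketch: deriving a user's dominant multimodal integration
# pattern from speech/pen interval timestamps. All timings are illustrative.

def integration_pattern(speech_intervals, pen_intervals):
    """Label each speech/pen pair 'simultaneous' if the two signals
    overlap in time, else 'sequential' (one mode lags the other)."""
    labels = []
    for (s_on, s_off), (p_on, p_off) in zip(speech_intervals, pen_intervals):
        overlap = min(s_off, p_off) - max(s_on, p_on)
        labels.append("simultaneous" if overlap > 0 else "sequential")
    return labels

def dominant_pattern(labels):
    """A user's dominant pattern is whichever label occurs most often."""
    return max(set(labels), key=labels.count)
```

A fusion engine could then wait longer before fusing input from a known sequential integrator (to span the intermodal lag) while fusing a simultaneous integrator's input almost immediately, which is one way the temporal thresholds mentioned above might be adapted per user.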

Original language: English
Title of host publication: ICMI'03
Subtitle of host publication: Fifth International Conference on Multimodal Interfaces
Place of publication: New York NY USA
Publisher: Association for Computing Machinery (ACM)
Number of pages: 8
ISBN (Print): 1581136218
Publication status: Published - 2003
Externally published: Yes
Event: International Conference on Multimodal Interfaces 2003 - Vancouver, Canada
Duration: 5 Nov 2003 - 7 Nov 2003
Conference number: 5th (Proceedings)


Conference: International Conference on Multimodal Interfaces 2003
Abbreviated title: ICMI 2003


  • Human performance errors
  • Multimodal integration
  • Self-regulatory language
  • Senior users
  • Speech and pen input
  • Task difficulty
