Abstract
Multisensory integration is the perceptual process by which the user of a Head-Mounted Display (HMD) combines visual information from the HMD with concurrent auditory signals into a single perceptual object. Because HMD users are usually mobile, visual and auditory information may not always be spatially congruent, yet congruence is a requirement for multisensory integration to occur. Previous research has shown that multisensory integration is less effective when the user is walking and sound is delivered via a speaker in a fixed location. In Experiment 1, we showed that people integrate information less effectively when they hear sound from a fixed speaker while walking rather than sitting because they experience a combination of sound motion and background motion, not because of any workload associated with walking. In Experiment 2, in which the multisensory integration task did not rely on working memory, performance was worse when participants walked rather than sat when they heard sound through an earpiece rather than in free field. These mixed results highlight the difficulty of replicating multisensory integration research in applied contexts.
Original language | English
---|---
Title of host publication | 53rd Human Factors and Ergonomics Society Annual Meeting 2009, HFES 2009
Pages | 1131-1135
Number of pages | 5
Volume | 2
Publication status | Published - 1 Dec 2009
Externally published | Yes
Event | International Annual Meeting of the Human Factors and Ergonomics Society 2009, San Antonio, United States of America; Duration: 19 Oct 2009 → 23 Oct 2009; Conference number: 53rd
Conference
Conference | International Annual Meeting of the Human Factors and Ergonomics Society 2009
---|---
Abbreviated title | HFES 2009
Country/Territory | United States of America
City | San Antonio
Period | 19/10/09 → 23/10/09