Abstract
When learning sign language, feedback on accuracy is critical to vocabulary acquisition. To design technologies that provide such feedback, we need effective ways to identify learners' errors and to present meaningful, actionable feedback. Motion capture technologies offer new opportunities to enhance sign language learning through visual feedback that indicates the accuracy of the signs learners produce. We designed, developed, and evaluated an embodied agent-based system for learning the location and gross motor movements of sign language vocabulary. The system presents a sign, tracks the learner's attempts at the sign, and provides visual feedback on their errors. We compared five types of visual feedback and, in a study with 51 participants, found that learners preferred feedback in which their attempts were shown concurrently with the instructor's movements, with or without explicit corrections.
Original language | English |
---|---|
Title of host publication | Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, IVA 2020 |
Editors | Hannes Vilhjalmsson, Pedro Sequeira, Emily S. Cross |
Place of Publication | New York NY USA |
Publisher | Association for Computing Machinery (ACM) |
Number of pages | 8 |
ISBN (Electronic) | 9781450375863 |
DOIs | |
Publication status | Published - 2020 |
Event | Intelligent Virtual Agents 2020 - Virtual, United Kingdom |
Duration | 20 Oct 2020 → 22 Oct 2020 |
Conference number | 20th |
Internet address | https://dl.acm.org/doi/proceedings/10.1145/3383652 (proceedings) https://iva2020.gla.ac.uk (Website) |
Conference
Conference | Intelligent Virtual Agents 2020 |
---|---|
Abbreviated title | IVA'20 |
Country/Territory | United Kingdom |
City | Virtual |
Period | 20/10/20 → 22/10/20 |
Internet address | https://dl.acm.org/doi/proceedings/10.1145/3383652 (proceedings) https://iva2020.gla.ac.uk (Website) |
Keywords
- accessibility
- HCI
- intelligent virtual agent
- motor skill
- sign language
- visual feedback
- visualization