Abstract
By modeling difficult sources of linguistic variability in speech and language, we can design interfaces that transparently guide human input to match system processing capabilities. Such work will yield more user-centered and robust interfaces for next-generation spoken language and multimodal systems.
| Original language | English |
| --- | --- |
| Pages (from-to) | 26-35 |
| Number of pages | 10 |
| Journal | IEEE Multimedia |
| Volume | 3 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1996 |
| Externally published | Yes |