Abstract
In this paper, we present an end-to-end system for enhancing the effectiveness of non-verbal gestures in human-robot interaction. We identify the gestures used most prominently by TED talk speakers and map them to their corresponding speech context, modulating the robot's speech according to the attention of the listener. Gestures are localised with a convolutional neural network (CNN)-based approach, and the dominant gestures of TED speakers are used to learn the gesture-to-speech mapping. We evaluated the robot's engagement with people through a social survey. The robot monitored the effectiveness of its own performance and improvised its speech pattern based on the attention level of the audience, which was computed from visual feedback captured by its camera. The effectiveness of the interaction, as well as the decisions made during improvisation, was further evaluated through head-pose detection and an interaction survey.
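To make the attention-monitoring step concrete, the sketch below estimates each audience member's head yaw from 2D facial landmarks using OpenCV's `solvePnP` and aggregates a simple attention score. This is a minimal illustration under stated assumptions: the function names (`estimate_yaw`, `attention_score`), the six-landmark generic face model, the focal-length approximation, and the 25° yaw threshold are all hypothetical choices for exposition, not the implementation described in the paper.

```python
# Hypothetical sketch of head-pose-based attention scoring; not the
# authors' actual pipeline. Assumes six 2D facial landmarks per face
# (e.g. from dlib or MediaPipe), ordered to match MODEL_POINTS.
import numpy as np
import cv2

# Generic 3D reference points of a face in an arbitrary metric frame:
# nose tip, chin, eye corners, mouth corners. A common choice for solvePnP.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_yaw(landmarks_2d, frame_size):
    """Return head yaw in degrees from six 2D landmarks, or None on failure."""
    h, w = frame_size
    focal = w  # crude assumption: focal length roughly equals image width
    camera_matrix = np.array([[focal, 0.0, w / 2.0],
                              [0.0, focal, h / 2.0],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, _tvec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(landmarks_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot_mat, _ = cv2.Rodrigues(rvec)
    euler, *_ = cv2.RQDecomp3x3(rot_mat)  # (pitch, yaw, roll) in degrees
    return euler[1]

def attention_score(all_landmarks, frame_size, yaw_limit=25.0):
    """Fraction of detected faces oriented towards the robot/camera."""
    yaws = [estimate_yaw(lm, frame_size) for lm in all_landmarks]
    yaws = [y for y in yaws if y is not None]
    if not yaws:
        return 0.0
    return sum(abs(y) < yaw_limit for y in yaws) / len(yaws)
```

A speech controller could then treat a drop of this score below some setpoint as the trigger to switch to a modulated speech pattern, in the spirit of the self-improvisation described in the abstract.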
Original language | English |
---|---|
Title of host publication | 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) |
Editors | John-John Cabibihan, Mary Anne Williams, T. Asokan, Laxmidhar Behera |
Place of Publication | Piscataway NJ USA |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Number of pages | 7 |
ISBN (Electronic) | 9781728126227 |
ISBN (Print) | 9781728126234 |
DOIs | |
Publication status | Published - 2019 |
Event | IEEE International Conference on Robot and Human Interactive Communication 2019 (28th) - New Delhi, India. Duration: 14 Oct 2019 → 18 Oct 2019. https://ro-man2019.org/ (conference website); https://ieeexplore.ieee.org/xpl/conhome/8951224/proceeding (proceedings) |
Conference
Conference | IEEE International Conference on Robot and Human Interactive Communication 2019 |
---|---|
Abbreviated title | RO-MAN 2019 |
Country/Territory | India |
City | New Delhi |
Period | 14/10/19 → 18/10/19 |
Internet address | https://ro-man2019.org/ |