Abstract
This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations resulted in models that predicted the performance level (good or poor) of the presenters with 68% and 63% correctly classified instances for the eye contact and the posture and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and smoothness, are highly significant in predicting the level of development of presentation skills. The paper finishes with conclusions and ideas for future work.
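The paper itself does not include code; as a rough illustration of the kind of binary-classification setup the abstract describes, the sketch below cross-validates a classifier over per-presenter feature vectors. Everything here is a hypothetical stand-in: the feature names, the synthetic data, and the choice of a random-forest model are assumptions, not the authors' actual features, model, or evaluation protocol.

```python
# Minimal sketch of a good/poor presenter classifier, assuming one row per
# presenter of Kinect-derived descriptors (e.g. arm movement, smoothness).
# All data below is synthetic; it only mirrors the shape of the task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(448, 4))     # 448 presenters x 4 hypothetical features
y = rng.integers(0, 2, size=448)  # 1 = good, 0 = poor (synthetic labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"Correctly classified instances: {scores.mean():.0%}")
```

On real features rather than noise, the per-fold accuracies from `cross_val_score` would be the analogue of the 68% and 63% correctly classified instances reported in the abstract.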
Original language | English |
---|---|
Title of host publication | MLA 2014 - Proceedings of the 2014 ACM Multimodal Learning Analytics Workshop and Grand Challenge, Co-located with ICMI 2014 |
Publisher | Association for Computing Machinery (ACM) |
Pages | 53-60 |
Number of pages | 8 |
ISBN (Electronic) | 9781450304887 |
DOIs | |
Publication status | Published - 12 Nov 2014 |
Externally published | Yes |
Event | Multimodal Learning Analytics Workshop and Grand Challenges 2014 - Istanbul, Türkiye; Duration: 12 Nov 2014 → 12 Nov 2014; Conference number: 3rd; http://icmi.acm.org/2014/ |
Workshop
Workshop | Multimodal Learning Analytics Workshop and Grand Challenges 2014 |
---|---|
Abbreviated title | MLA 2014 |
Country/Territory | Türkiye |
City | Istanbul |
Period | 12/11/14 → 12/11/14 |
Other | Event: 16th ACM International Conference on Multimodal Interaction, ICMI 2014 - Istanbul, Türkiye; Duration: 12 Nov 2014 → 16 Nov 2014; Proceedings title: ICMI 2014 - Proceedings of the 2014 International Conference on Multimodal Interaction |
Internet address | http://icmi.acm.org/2014/ |
Keywords
- Multimodal
- Presentation skills
- Video features