Multimodal speech-gesture interface for handfree painting on a virtual paper using partial recurrent neural networks as gesture recognizer

Andrea Corradini, Philip R. Cohen

Research output: Contribution to conference › Paper › peer-review

22 Citations (Scopus)


We describe a pointing-and-speech alternative to current paint programs based on traditional devices such as the mouse, pen, or keyboard. We used a simple magnetic-field-tracker-based pointing system as the input device for a painting system, providing a convenient means for the user to specify paint locations on a virtual paper. The virtual paper itself is defined by the operator as a bounded planar surface in three-dimensional space. Drawing occurs through natural human pointing: the hand defines a line in space, and its possible intersection point with this plane determines the paint location. Pointing gestures are recognized by means of a partial recurrent artificial neural network. Gestures, together with several vocal commands, act on the current painting in conformity with a predefined grammar.
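The core geometric step the abstract describes, mapping the pointing hand onto the virtual paper, amounts to intersecting a ray with a plane. A minimal sketch, with illustrative names not taken from the paper (the plane is given by a point `p0` and normal `n`, the pointing hand by a ray origin `o` and direction `d`):

```python
def intersect_ray_plane(o, d, p0, n, eps=1e-9):
    """Return the intersection of the ray o + t*d (t >= 0) with the
    plane through p0 with normal n, or None if the ray is (nearly)
    parallel to the plane or points away from it.
    All arguments are 3-tuples of floats; names are illustrative."""
    # Component of the pointing direction along the plane normal.
    denom = sum(di * ni for di, ni in zip(d, n))
    if abs(denom) < eps:
        return None  # ray is parallel to the virtual paper
    # Parameter t of the intersection along the ray.
    t = sum((p0i - oi) * ni for p0i, oi, ni in zip(p0, o, n)) / denom
    if t < 0:
        return None  # intersection lies behind the hand
    return tuple(oi + t * di for oi, di in zip(o, d))

# Example: paper is the plane z = 2, hand points from the origin along +z.
print(intersect_ray_plane((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                          (0.0, 0.0, 2.0), (0.0, 0.0, 1.0)))
# -> (0.0, 0.0, 2.0)
```

In practice the intersection point would still have to be tested against the paper's boundary, since the operator defines the paper as a limited region of the plane.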

Original language: English
Number of pages: 6
Publication status: Published - 1 Jan 2002
Externally published: Yes
Event: IEEE International Joint Conference on Neural Networks 2002 - Honolulu, United States of America
Duration: 12 May 2002 – 17 May 2002 (Proceedings)


Conference: IEEE International Joint Conference on Neural Networks 2002
Abbreviated title: IJCNN 2002
Country/Territory: United States of America


Keywords:
  • Augmented and virtual reality
  • Communication agent
  • Multimodal system
  • Painting tool
  • Partial recurrent artificial neural network
  • Pointing gesture
  • Speech recognition
  • User-centered interface
