ARAnimator: In-situ character animation in mobile AR with user-defined motion gestures

Hui Ye, Kin Chung Kwan, Wanchao Su, Hongbo Fu

Research output: Contribution to journal › Article › Research › peer-review

29 Citations (Scopus)


Creating animated virtual AR characters that closely interact with real environments is interesting but difficult. Existing systems adopt video see-through approaches to indirectly control a virtual character in mobile AR, making close interaction with real environments unintuitive. In this work, we use an AR-enabled mobile device to directly control the position and motion of a virtual character situated in a real environment. We conduct two guessability studies to elicit user-defined motions of a virtual character interacting with real environments, and a set of user-defined motion gestures describing specific character motions. We find that an SVM-based learning approach achieves reasonably high accuracy for gesture classification from the motion data of a mobile device. We present ARAnimator, which allows novice and casual animation users to directly represent a virtual character with an AR-enabled mobile phone and control its animation in AR scenes using motion gestures of the device, followed by animation preview and interactive editing through a video see-through interface. Our experimental results show that with ARAnimator, users are able to easily create in-situ character animations that closely interact with different real environments.
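The abstract mentions classifying motion gestures from device motion data with an SVM. As a rough illustration of that idea (not the paper's actual pipeline), the sketch below extracts simple per-axis statistics from 3-axis accelerometer traces and trains an RBF-kernel SVM on two synthetic toy gesture classes; the feature set, class names, and data are all hypothetical.

```python
# Hypothetical sketch: SVM-based gesture classification from device motion
# data, assuming (T, 3) accelerometer traces. Features and data are
# illustrative only, not the system described in the paper.
import numpy as np
from sklearn.svm import SVC

def extract_features(trace):
    """Summarize a (T, 3) accelerometer trace with simple per-axis statistics."""
    trace = np.asarray(trace, dtype=float)
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0),
                           trace.min(axis=0), trace.max(axis=0)])

# Synthetic training data: two toy gesture classes.
rng = np.random.default_rng(0)
shake = [rng.normal(0.0, 2.0, size=(50, 3)) for _ in range(20)]  # high variance
lift = [rng.normal(1.0, 0.2, size=(50, 3)) for _ in range(20)]   # steady drift

X = np.stack([extract_features(t) for t in shake + lift])
y = np.array([0] * 20 + [1] * 20)  # 0 = "shake", 1 = "lift"

clf = SVC(kernel="rbf", C=1.0)  # one plausible kernel choice
clf.fit(X, y)

# Classify a new, unseen shake-like trace.
probe = extract_features(rng.normal(0.0, 2.0, size=(50, 3)))
pred = clf.predict([probe])[0]
```

In a real system the traces would come from the phone's IMU, and the feature extraction and kernel parameters would need tuning against user-elicited gesture data.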

Original language: English
Article number: 83
Number of pages: 12
Journal: ACM Transactions on Graphics
Issue number: 4
Publication status: Published - 12 Aug 2020
Externally published: Yes


Keywords

  • character animation
  • gesture classification
  • interactive system
  • mobile augmented reality
  • user-defined gestures
