Socially and contextually aware human motion and pose forecasting

Vida Adeli, Ehsan Adeli, Ian Reid, Juan Carlos Niebles, Hamid Rezatofighi

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)

Abstract

Smooth and seamless robot navigation while interacting with humans depends on predicting human movements. Forecasting such human dynamics typically involves modeling human trajectories (global motion) or detailed body joint movements (local motion), and prior work has tackled the two separately. In this letter, we propose a novel framework that addresses both tasks, human motion (trajectory) forecasting and body skeleton pose forecasting, in a unified end-to-end pipeline. To handle this real-world problem, we incorporate both scene and social contexts, critical cues for this prediction task, into the proposed framework. To this end, we first couple the two tasks by i) encoding their history with a shared Gated Recurrent Unit (GRU) encoder and ii) applying a loss metric that measures the errors of both tasks jointly as a single distance. We then incorporate the scene context by encoding a spatio-temporal representation of the video data, and include social cues by generating a joint feature representation of the motion and pose of all individuals in the scene through a social pooling layer. Finally, a GRU-based decoder forecasts both motion and skeleton pose. We demonstrate that the proposed framework achieves superior performance over several baselines on two social datasets.
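
The sketch below illustrates how such a pipeline might be wired together. It is an assumption-laden PyTorch illustration, not the authors' implementation: the class name `SocialMotionPoseForecaster`, all layer sizes, the max-based social pooling, and the world-coordinate loss are hypothetical stand-ins for the components the abstract names (shared GRU encoder, spatio-temporal scene feature, social pooling layer, GRU decoder, single-distance joint loss).

```python
import torch
import torch.nn as nn


class SocialMotionPoseForecaster(nn.Module):
    """Unified trajectory + pose forecaster (illustrative only).

    One shared GRU encodes each person's concatenated trajectory and
    pose history, a social pooling step aggregates hidden states over
    everyone in the scene, and a GRU decoder rolls out both outputs.
    All layer sizes and names here are hypothetical.
    """

    def __init__(self, traj_dim=2, pose_dim=26, hidden=128, scene_dim=64):
        super().__init__()
        self.traj_dim = traj_dim
        # Shared encoder couples the two tasks (point i in the abstract).
        self.encoder = nn.GRU(traj_dim + pose_dim, hidden, batch_first=True)
        # Scene context: assume a precomputed spatio-temporal video
        # feature (e.g., from a 3D CNN), projected to the hidden size.
        self.scene_proj = nn.Linear(scene_dim, hidden)
        # Decoder sees the person's own state, the socially pooled
        # state, and the scene feature.
        self.decoder = nn.GRUCell(3 * hidden, hidden)
        self.head = nn.Linear(hidden, traj_dim + pose_dim)

    def forward(self, history, scene_feat, horizon=14):
        # history:    (num_people, obs_len, traj_dim + pose_dim)
        # scene_feat: (scene_dim,) shared video feature for the clip
        _, h = self.encoder(history)
        h = h.squeeze(0)                                  # (people, hidden)
        # Social pooling: max over all people, broadcast back to each.
        social = h.max(dim=0, keepdim=True).values.expand_as(h)
        scene = self.scene_proj(scene_feat).expand_as(h)
        inp = torch.cat([h, social, scene], dim=-1)
        state, outputs = h, []
        for _ in range(horizon):                          # autoregressive rollout
            state = self.decoder(inp, state)
            outputs.append(self.head(state))
        pred = torch.stack(outputs, dim=1)                # (people, horizon, D)
        return pred[..., :self.traj_dim], pred[..., self.traj_dim:]


def joint_distance_loss(pred_traj, pred_pose, gt_traj, gt_pose):
    """Single-distance loss over both tasks (point ii in the abstract):
    place the local skeleton at the global root position and measure
    one L2 distance, so trajectory and pose errors share one metric.
    A plausible reading, not the paper's exact formulation."""
    d = pred_traj.shape[-1]
    pred_world = pred_pose.reshape(*pred_pose.shape[:-1], -1, d) + pred_traj.unsqueeze(-2)
    gt_world = gt_pose.reshape(*gt_pose.shape[:-1], -1, d) + gt_traj.unsqueeze(-2)
    return (pred_world - gt_world).norm(dim=-1).mean()


# Toy usage: 3 people, 16 observed frames, 13 joints in 2-D.
model = SocialMotionPoseForecaster()
traj, pose = model(torch.randn(3, 16, 28), torch.randn(64), horizon=14)
loss = joint_distance_loss(traj, pose, torch.randn_like(traj), torch.randn_like(pose))
loss.backward()
```

Pooling everyone's encoder state before decoding is what makes each person's forecast socially aware; the single world-coordinate distance is one way to make trajectory and pose errors commensurable rather than weighting two separate losses.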

Original language: English
Pages (from-to): 6033-6040
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 4
DOIs
Publication status: Published - Oct 2020
Externally published: Yes

Keywords

  • context-aware prediction
  • global motion
  • human pose forecasting
  • human-robot interaction
  • social models