Learning Structured Representations of Spatial and Interactive Dynamics for Trajectory Prediction in Crowded Scenes

Todor Bozhinov Davchev, Michael Burke, Subramanian Ramamoorthy

Research output: Contribution to journal › Article (peer-reviewed)


Context plays a significant role in the generation of motion for dynamic agents in interactive environments. This work proposes a modular method that utilises a learned model of the environment for motion prediction and explicitly allows for unsupervised adaptation of trajectory prediction models to unseen environments and new tasks by decoupling per-agent dynamics and environment modelling. Modelling both the spatial and dynamic aspects of a given environment alongside the per-agent behaviour results in more informed motion prediction and allows for performance comparable to the state of the art. We highlight the model's prediction capability using a benchmark pedestrian prediction problem and a robot manipulation task, and show that we can transfer the predictor across these tasks in a completely unsupervised way. The proposed approach allows for robust and label-efficient forward modelling, and relaxes the need for full model re-training in new environments.
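The decoupling described above can be illustrated with a minimal sketch: a separate environment module and per-agent dynamics module, where only the environment part is swapped when moving to a new scene. All class names, shapes, and the linear dynamics below are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

class EnvironmentModel:
    """Stand-in for a learned spatial model: maps a position to a context feature."""
    def __init__(self, occupancy_grid):
        self.grid = occupancy_grid  # hypothetical learned occupancy representation

    def context(self, pos):
        # Look up a scalar context feature at the (clipped) grid cell.
        i, j = np.clip(np.round(pos).astype(int), 0, np.array(self.grid.shape) - 1)
        return np.array([self.grid[i, j]])

class AgentDynamics:
    """Toy per-agent motion model conditioned on environment context."""
    def __init__(self, rng=None):
        rng = rng or np.random.default_rng(0)
        # Maps [pos (2), vel (2), ctx (1)] to a velocity update.
        self.W = rng.normal(scale=0.1, size=(2, 5))

    def step(self, pos, vel, ctx):
        x = np.concatenate([pos, vel, ctx])
        next_vel = vel + self.W @ x
        return pos + next_vel, next_vel

def predict(env, agent, pos, vel, horizon=5):
    """Roll out the agent model, querying the environment model at each step."""
    traj = [pos]
    for _ in range(horizon):
        pos, vel = agent.step(pos, vel, env.context(pos))
        traj.append(pos)
    return np.stack(traj)

# The same agent model is paired with two different environment models,
# mirroring the claimed modularity: adapting to a new scene changes only
# the environment module, not the per-agent dynamics.
scene_a = EnvironmentModel(np.zeros((10, 10)))
scene_b = EnvironmentModel(np.ones((10, 10)))
agent = AgentDynamics()
traj_a = predict(scene_a, agent, np.array([1.0, 1.0]), np.array([0.5, 0.0]))
traj_b = predict(scene_b, agent, np.array([1.0, 1.0]), np.array([0.5, 0.0]))
```

Because the context feature differs between the two scenes, the shared agent model produces different trajectories in each, without any retraining of its parameters.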

Original language: English
Pages (from-to): 707-714
Journal: IEEE Robotics and Automation Letters
Issue number: 2
Publication status: Published - Apr 2021


Keywords:

  • Adaptation models
  • Data models
  • Dynamics
  • Novel Deep Learning Methods
  • Predictive models
  • Representation Learning
  • Task analysis
  • Trajectory
  • Vehicle dynamics