Abstract
Context plays a significant role in the generation of motion for dynamic agents in interactive environments. This work proposes a modular method that utilises a learned model of the environment for motion prediction and, by decoupling per-agent dynamics from environment modelling, explicitly allows trajectory prediction models to be adapted to unseen environments and new tasks without supervision. Modelling both the spatial and dynamic aspects of a given environment alongside the per-agent behaviour results in more informed motion prediction and yields performance comparable to the state of the art. We highlight the model's prediction capability on a benchmark pedestrian prediction problem and a robot manipulation task, and show that the predictor can be transferred across these tasks in a completely unsupervised way. The proposed approach enables robust and label-efficient forward modelling, and relaxes the need for full model re-training in new environments.
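
As a rough illustration of the decoupling described in the abstract, the sketch below separates an environment encoder from a per-agent dynamics model, so that only the environment side would need to be adapted in a new setting while the agent model stays fixed. This is a minimal PyTorch sketch under assumed choices (the occupancy-patch input, module names, network sizes, and GRU dynamics are all illustrative), not the authors' implementation.

```python
# Minimal sketch of a modular predictor: environment context and per-agent
# dynamics are separate modules. All names and sizes here are assumptions.
import torch
import torch.nn as nn

class EnvironmentEncoder(nn.Module):
    """Encodes a local environment observation (e.g. an occupancy patch) into a context vector."""
    def __init__(self, context_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, context_dim),
        )

    def forward(self, patch):
        return self.net(patch)

class AgentDynamics(nn.Module):
    """Recurrent per-agent dynamics model conditioned on an environment context vector."""
    def __init__(self, state_dim=2, context_dim=32, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(state_dim + context_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, past_states, context):
        # Broadcast the context vector over the observed trajectory steps.
        ctx = context.unsqueeze(1).expand(-1, past_states.size(1), -1)
        out, _ = self.rnn(torch.cat([past_states, ctx], dim=-1))
        return self.head(out[:, -1])  # predicted next step

# Usage: in a new environment, only the encoder would be adapted (e.g. with an
# unsupervised objective); the per-agent dynamics module is reused as-is.
encoder = EnvironmentEncoder()
dynamics = AgentDynamics()
patch = torch.randn(4, 1, 32, 32)      # batch of local environment patches (assumed input)
past = torch.randn(4, 8, 2)            # 8 observed (x, y) steps per agent
pred = dynamics(past, encoder(patch))  # next-step prediction, shape (4, 2)
```

The point of the split is that the prediction head never sees raw environment observations, only the context vector, so swapping or re-fitting the encoder for a new scene or task leaves the agent model untouched.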
Original language | English |
---|---|
Pages (from-to) | 707-714 |
Journal | IEEE Robotics and Automation Letters |
Volume | 6 |
Issue number | 2 |
DOIs | |
Publication status | Published - Apr 2021 |
Keywords
- Adaptation models
- Data models
- Dynamics
- Novel Deep Learning Methods
- Predictive models
- Representation Learning
- Task analysis
- Trajectory
- Vehicle dynamics