MOLTR: multiple object localisation, tracking and reconstruction from monocular RGB videos

Kejie Li, Hamid Rezatofighi, Ian Reid

Research output: Contribution to journal › Article › peer-review

Abstract

Semantically aware reconstruction is more advantageous than geometry-only reconstruction for future robotic and AR/VR applications because it represents not only where things are, but also what they are. Object-centric mapping is the task of building an object-level reconstruction in which objects are separate and meaningful entities that convey both geometry and semantics. In this paper, we present MOLTR, a solution to object-centric mapping using only monocular image sequences and camera poses. It localises, tracks, and reconstructs multiple rigid objects in an online fashion as an RGB camera captures a video of the surroundings. Given a new RGB frame, MOLTR first applies a monocular 3D detector to localise objects of interest and extract their shape codes, which represent object shapes in a learnt embedding space. Detections are then merged into existing objects in the map after data association. The motion state (i.e., kinematics and motion status) of each object is tracked by a multiple-model Bayesian filter, and the object shape is progressively refined by fusing multiple shape codes. We evaluate localisation, tracking, and reconstruction on benchmark datasets for indoor and outdoor scenes, and show superior performance over previous approaches.
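
The abstract describes a per-frame loop: detect objects in 3D, associate detections with mapped objects, update each object's motion state with a Bayesian filter, and fuse shape codes. The sketch below illustrates that loop in Python under loud assumptions: the names (`ObjectTrack`, `associate`, `step`), the greedy nearest-neighbour association, the alpha-beta filter, and the code-averaging fusion rule are all hypothetical simplifications standing in for the paper's monocular 3D detector outputs, data association, multiple-model Bayesian filter, and shape-code fusion, none of which are specified here.

```python
import numpy as np


class ObjectTrack:
    """Hypothetical per-object state: 3D kinematics plus a fused shape code."""

    def __init__(self, position, shape_code):
        self.position = np.asarray(position, dtype=float)  # 3D centroid
        self.velocity = np.zeros(3)                        # crude constant-velocity model
        self.shape_codes = [np.asarray(shape_code, dtype=float)]

    def predict(self, dt):
        # Constant-velocity prediction; a stand-in for the paper's
        # multiple-model Bayesian filter over motion states.
        self.position = self.position + self.velocity * dt

    def update(self, position, shape_code, dt, alpha=0.6, beta=0.2):
        # Alpha-beta filter correction (a simple Kalman-style update).
        residual = np.asarray(position, dtype=float) - self.position
        self.position = self.position + alpha * residual
        self.velocity = self.velocity + (beta / dt) * residual
        self.shape_codes.append(np.asarray(shape_code, dtype=float))

    def fused_shape_code(self):
        # Progressive shape refinement: average the observed codes in the
        # learnt embedding space (one plausible fusion rule, assumed here).
        return np.mean(self.shape_codes, axis=0)


def associate(tracks, detections, max_dist=1.0):
    """Greedy nearest-neighbour data association on 3D centroid distance."""
    matches, unmatched, used = [], [], set()
    for d_idx, (pos, _) in enumerate(detections):
        dists = [np.inf if i in used else np.linalg.norm(t.position - pos)
                 for i, t in enumerate(tracks)]
        if dists and min(dists) < max_dist:
            t_idx = int(np.argmin(dists))
            used.add(t_idx)
            matches.append((t_idx, d_idx))
        else:
            unmatched.append(d_idx)  # unmatched detections spawn new objects
    return matches, unmatched


def step(tracks, detections, dt=0.1):
    """One per-frame update: predict, associate, correct, spawn new tracks."""
    for t in tracks:
        t.predict(dt)
    matches, unmatched = associate(tracks, detections)
    for t_idx, d_idx in matches:
        pos, code = detections[d_idx]
        tracks[t_idx].update(pos, code, dt)
    for d_idx in unmatched:
        tracks.append(ObjectTrack(*detections[d_idx]))
    return tracks


# Usage: each detection is a (3D position, shape code) pair from a detector.
tracks = step([], [(np.array([1.0, 0.0, 2.0]), np.zeros(64))])
tracks = step(tracks, [(np.array([1.05, 0.0, 2.1]), np.ones(64))], dt=0.1)
print(tracks[0].position, tracks[0].fused_shape_code()[:4])
```

In the paper the filter is a multiple-model Bayesian filter that also tracks a discrete motion status (e.g. moving versus static); the single alpha-beta model above is only the simplest placeholder that keeps the predict/associate/update structure visible.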

Original language: English
Pages (from-to): 3341-3348
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
DOIs
Publication status: Published - Apr 2021

Keywords

  • Cameras
  • Deep Learning for Visual Perception
  • Image reconstruction
  • Mapping
  • Recognition
  • Semantics
  • Shape
  • Three-dimensional displays
  • Tracking
  • Two dimensional displays
