Multi-cue 3D model-based object tracking

Geoffrey Taylor, Lindsay Kleeman

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review


Once an object has been located and classified using the techniques in the previous chapters, the system must continue to update the estimated pose for several reasons. Clearly, the initial pose will quickly become invalid if the object is under internal or external dynamic influences. However, even if the object is static, the motion of the active cameras may dynamically bias the estimated pose through modelling errors. Even a small pose bias is sufficient to destabilize a planned grasp and cause the manipulation to fail. Tracking is therefore an important component in a robust grasping and manipulation framework. If the range sensing and segmentation methods described in Chapters 3 and 4 could be performed with sufficient speed, tracking could be implemented by continuously repeating this process. Unfortunately, the current measurement rate (up to one minute per range scan) renders this approach unsuitable for real-time tracking. However, the textured polygonal models and initial pose information from range data segmentation provide an ideal basis for 3D model-based tracking. To close the visual feedback loop, this chapter now addresses the problem of continuously updating the pose of modelled objects.

Original language: English
Title of host publication: Robotic Manipulation
Subtitle of host publication: 3D Object Recognition, Tracking and Hand-Eye Coordination
Number of pages: 29
Publication status: Published - 27 Sep 2006

Publication series

Name: Springer Tracts in Advanced Robotics
ISSN (Print): 1610-7438
ISSN (Electronic): 1610-742X