Once an object has been located and classified using the techniques of the previous chapters, the system must continue to update the estimated pose for several reasons. Clearly, the initial pose quickly becomes invalid if the object is subject to internal or external dynamic influences. Even if the object is static, however, the motion of the active cameras may dynamically bias the estimated pose through modelling errors, and even a small pose bias is sufficient to destabilize a planned grasp and cause the manipulation to fail. Tracking is therefore an important component of a robust grasping and manipulation framework.

If the range sensing and segmentation methods described in Chapters 3 and 4 could be performed with sufficient speed, tracking could be implemented by continuously repeating this process. Unfortunately, the current measurement rate (up to one minute per range scan) renders this approach unsuitable for real-time tracking. However, the textured polygonal models and initial pose estimates recovered from range data segmentation provide an ideal basis for 3D model-based tracking. To close the visual feedback loop, this chapter therefore addresses the problem of continuously updating the pose of modelled objects.
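The feedback-loop idea above can be sketched in code. The following is a minimal, hypothetical illustration (not the method developed in this chapter): the slow range-sensing pipeline supplies only the initial pose, and a fast per-frame loop then refines it against cheap measurements. Here the tracker estimates a 3-DOF translation only, and `measure` is a stand-in for the image-based residual a real model-based tracker would compute from the textured polygonal model; the gain, frame count, and noise level are all assumed values.

```python
import numpy as np

def track(initial_pose, measure, gain=0.5, n_frames=20):
    """Refine a 3-DOF translation estimate frame by frame.

    measure(pose) returns a noisy residual (measured minus predicted
    position), standing in for the image-based error a full model-based
    tracker would derive by projecting the object model at the current
    pose estimate. The loop applies a simple proportional correction.
    """
    pose = np.asarray(initial_pose, dtype=float)
    for _ in range(n_frames):
        residual = measure(pose)        # fast per-frame measurement
        pose = pose + gain * residual   # proportional pose correction
    return pose

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_pose = np.array([0.30, -0.10, 0.85])   # metres, simulated ground truth
    # Simulated measurement: true residual corrupted by sensor noise.
    measure = lambda p: (true_pose - p) + rng.normal(0.0, 0.002, 3)
    initial = np.array([0.25, -0.05, 0.90])     # biased initial estimate
    final = track(initial, measure)
    print("initial error:", np.linalg.norm(true_pose - initial))
    print("final error:  ", np.linalg.norm(true_pose - final))
```

The point of the sketch is only the loop structure: the initial pose from segmentation is never re-measured from scratch; it is corrected incrementally at camera frame rate, which is what closes the visual feedback loop.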