Hybrid position-based visual servoing

Geoffrey Taylor, Lindsay Kleeman

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review

Abstract

The preceding chapters developed a framework for perception based on automatic extraction of 3D models from range data for object classification and tracking. In robotics, however, perception is only ever half the story! This chapter addresses the complementary problem of controlling a robotic manipulator to interact with the perceived world. Specifically, the controller must be able to drive an end-effector to some desired pose relative to a detected object. The traditional solution is kinematic control, in which joint angles form the control error and the pose of the end-effector is reconstructed through forward kinematics. This approach can be effective for service robots when the camera parameters and kinematic model are well calibrated, as demonstrated in [10]. However, it is generally accepted that kinematic control deteriorates with increasing mechanical complexity [45]. Economic constraints impose additional limitations on the accuracy of calibration, including low manufacturing tolerances, cheap sensors, and lightweight, compliant limbs adopted for efficiency and safety. Achieving reliable, long-term operation in an unpredictable environment reinforces the need to tolerate the effects of wear on sensors and mechanical components. Clearly, a more robust control solution is required.
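The abstract's notion of kinematic control, where the error is formed on joint angles and the desired joint configuration is obtained by inverse kinematics, can be illustrated with a minimal sketch. This is not the chapter's method; it assumes a hypothetical 2-link planar arm with made-up link lengths, closed-form IK, and a simple proportional joint-space loop, purely to show why accuracy hinges on a well-calibrated kinematic model.

```python
import math

# Assumed link lengths for a hypothetical 2-link planar arm (metres).
L1, L2 = 0.3, 0.25

def forward_kinematics(q1, q2):
    """End-effector (x, y) computed from joint angles via the kinematic model."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def inverse_kinematics(x, y):
    """Closed-form IK (elbow-down branch): recover joint angles for a target pose."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp against rounding error
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

def kinematic_control_step(q, q_goal, gain=0.5):
    """Joint-space proportional step: the control error is on joint angles,
    so any error in the model (L1, L2) maps directly into Cartesian error."""
    return [qi + gain * (qg - qi) for qi, qg in zip(q, q_goal)]

# Drive the arm toward a Cartesian target reconstructed through IK.
target = (0.4, 0.2)
q = [0.0, 0.0]
q_goal = inverse_kinematics(*target)
for _ in range(50):
    q = kinematic_control_step(q, q_goal)
x, y = forward_kinematics(*q)
```

If the true link lengths differ from `L1` and `L2` (mis-calibration, wear, compliance), the joint error converges to zero while the end-effector still misses the target, which is the fragility the chapter's hybrid visual servoing is designed to avoid.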

Original language: English
Title of host publication: Robotic Manipulation
Subtitle of host publication: 3D Object Recognition, Tracking and Hand-Eye Coordination
Pages: 115-144
Number of pages: 30
DOIs
Publication status: Published - 27 Sept 2006

Publication series

Name: Springer Tracts in Advanced Robotics
Volume: 26
ISSN (Print): 1610-7438
ISSN (Electronic): 1610-742X
