Abstract
We describe a software system that enables tight integration between the vision and control modules of complex, high-degree-of-freedom (DOF) humanoid robots. The system is demonstrated on the iCub humanoid robot performing visual object detection together with reaching and grasping actions. A key capability of this system is reactive avoidance of obstacles detected in the video stream while carrying out reach-and-grasp tasks. The subsystems of our architecture can be improved and updated independently; for example, we show that machine learning techniques applied to images collected during the robot's interaction with the environment can improve visual perception. We also describe the task and software design constraints that led to the layered, modular system architecture.
| Original language | English |
|---|---|
| Article number | 26 |
| Number of pages | 16 |
| Journal | Frontiers Robotics AI |
| Volume | 3 |
| Issue number | May |
| DOIs | |
| Publication status | Published - 25 May 2016 |
| Externally published | Yes |
Keywords
- Eye-hand coordination
- Humanoid robots
- Machine learning
- Reactive reaching
- Robotic vision
- Software framework