Abstract
We propose a method for learning specific object representations that can be applied and reused in visual detection and identification tasks. These models are created from a series of images using Cartesian Genetic Programming (CGP), a machine learning technique. Our research investigates how manipulation actions can lead to better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poking, pushing, and picking up, with a humanoid robot. The improvement is measurable, which allows the robot to select and perform the right action, i.e., the action that yields the greatest improvement of the detector.
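The action-selection idea from the abstract can be sketched as a simple loop: try each manipulation action, measure how much the detector improves on the resulting images, and keep the best one. The sketch below is a minimal illustration only; the helpers `detector_score` and `perform_action` are hypothetical stand-ins, not the paper's actual CGP-evolved detector or robot interface.

```python
# Minimal sketch of selecting the manipulation action that most improves
# the object detector. All names below are hypothetical placeholders.
import random

ACTIONS = ["poke", "push", "pick-up"]

def detector_score(images):
    """Hypothetical stand-in: evaluate the learned detector on images.

    In the paper this would be a CGP-evolved detector; here we return a
    random placeholder score so the sketch is runnable.
    """
    return random.random()

def perform_action(action):
    """Hypothetical stand-in: execute the action and capture new images."""
    return [f"image_after_{action}"]

def select_best_action(baseline_images):
    """Pick the action whose execution most improves the detector score."""
    baseline = detector_score(baseline_images)
    improvements = {}
    for action in ACTIONS:
        new_images = perform_action(action)
        improvements[action] = detector_score(baseline_images + new_images) - baseline
    return max(improvements, key=improvements.get)

if __name__ == "__main__":
    print(select_best_action(["initial_view"]))
```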
Original language | English |
---|---|
Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Pages | 3355-3362 |
Number of pages | 8 |
ISBN (Electronic) | 9781479914845 |
DOIs | |
Publication status | Published - 3 Sept 2014 |
Externally published | Yes |
Event | IEEE International Joint Conference on Neural Networks 2014, Beijing, China. Duration: 6 Jul 2014 → 11 Jul 2014. https://ieeexplore.ieee.org/xpl/conhome/6880678/proceeding (Proceedings) |
Conference
Conference | IEEE International Joint Conference on Neural Networks 2014 |
---|---|
Abbreviated title | IJCNN 2014 |
Country/Territory | China |
City | Beijing |
Period | 6/07/14 → 11/07/14 |
Internet address | https://ieeexplore.ieee.org/xpl/conhome/6880678/proceeding |