Improving robot vision models for object detection through interaction

Jürgen Leitner, Alexander Förster, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

8 Citations (Scopus)

Abstract

We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poking, pushing and picking up objects with a humanoid robot. The improvement can be measured, which allows the robot to select and perform the right action, i.e. the one expected to improve the detector the most.
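The learning machinery named in the abstract, Cartesian Genetic Programming, evolves a fixed-size feed-forward graph of primitive operations, typically with a (1+4) evolutionary strategy. The sketch below shows that core loop on a toy regression task; the function set, node count, mutation rate and fitness task are illustrative assumptions, not the image-operation primitives or detector fitness the paper actually uses.

```python
import random

# Illustrative function set; the paper's CGP would use image operations instead.
FUNCS = [
    ("add", lambda a, b: a + b),
    ("sub", lambda a, b: a - b),
    ("mul", lambda a, b: a * b),
    ("max", lambda a, b: max(a, b)),
]

N_INPUTS = 2   # toy scalar inputs (stand-ins for per-pixel features)
N_NODES = 10   # internal CGP nodes, single row

def random_genome(rng):
    # Each node: (function gene, two connection genes pointing to earlier
    # inputs/nodes); a final output gene selects which value is the result.
    genome = []
    for i in range(N_NODES):
        genome.append((rng.randrange(len(FUNCS)),
                       rng.randrange(N_INPUTS + i),
                       rng.randrange(N_INPUTS + i)))
    return genome, rng.randrange(N_INPUTS + N_NODES)

def evaluate(genome, out_gene, inputs):
    # Decode the graph: compute every node value left to right.
    values = list(inputs)
    for f, a, b in genome:
        values.append(FUNCS[f][1](values[a], values[b]))
    return values[out_gene]

def mutate(genome, out_gene, rng, rate=0.15):
    # Point mutation: redraw a node's genes (or the output gene) at random.
    new = []
    for i, (f, a, b) in enumerate(genome):
        if rng.random() < rate:
            f = rng.randrange(len(FUNCS))
            a = rng.randrange(N_INPUTS + i)
            b = rng.randrange(N_INPUTS + i)
        new.append((f, a, b))
    if rng.random() < rate:
        out_gene = rng.randrange(N_INPUTS + N_NODES)
    return new, out_gene

def error(genome, out_gene, cases):
    # Fitness: total absolute error over (inputs, target) training cases.
    return sum(abs(evaluate(genome, out_gene, x) - y) for x, y in cases)

def evolve(cases, generations=300, rng=None):
    # (1+4) evolutionary strategy commonly used with CGP: keep the parent,
    # accept a child whose error is no worse (allowing neutral drift).
    rng = rng or random.Random(0)
    parent = random_genome(rng)
    best = error(*parent, cases)
    for _ in range(generations):
        for _ in range(4):
            child = mutate(*parent, rng)
            e = error(*child, cases)
            if e <= best:
                parent, best = child, e
    return parent, best
```

Accepting equal-fitness children (`e <= best`) is the standard CGP choice: it lets the search drift through inactive genes. In the paper's setting the fitness would instead score a candidate detector against labelled images collected before and after a manipulation action.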

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 3355-3362
Number of pages: 8
ISBN (Electronic): 9781479914845
DOIs
Publication status: Published - 3 Sept 2014
Externally published: Yes
Event: IEEE International Joint Conference on Neural Networks 2014 - Beijing, China
Duration: 6 Jul 2014 - 11 Jul 2014
https://ieeexplore.ieee.org/xpl/conhome/6880678/proceeding (Proceedings)

Conference

Conference: IEEE International Joint Conference on Neural Networks 2014
Abbreviated title: IJCNN 2014
Country/Territory: China
City: Beijing
Period: 6/07/14 - 11/07/14
