A4T: Hierarchical affordance detection for transparent objects depth reconstruction and manipulation

Jiaqi Jiang, Guanqun Cao, Thanh-Toan Do, Shan Luo

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

Transparent objects are widely used in our daily lives, so robots need to be able to handle them. However, transparent objects suffer from light reflection and refraction, which makes it challenging to obtain the accurate depth maps required to perform handling tasks. In this letter, we propose A4T, a novel affordance-based framework for the depth reconstruction and manipulation of transparent objects. A hierarchical AffordanceNet first detects the transparent objects and their associated affordances, which encode the relative positions of an object's different parts. Then, given the predicted affordance map, a multi-step depth reconstruction method progressively reconstructs the depth maps of the transparent objects. Finally, the reconstructed depth maps are employed for affordance-based manipulation of the transparent objects. To evaluate our proposed method, we construct TRANS-AFF, the first real-world dataset of its kind with affordances and depth maps of transparent objects. Extensive experiments show that our proposed method predicts accurate affordance maps and significantly improves the depth reconstruction of transparent objects compared to the state-of-the-art method, reducing the Root Mean Squared Error (in meters) from 0.097 to 0.042. Furthermore, we demonstrate the effectiveness of the proposed method with a series of robotic manipulation experiments on transparent objects.
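The abstract reports depth-reconstruction quality as Root Mean Squared Error in meters (0.097 reduced to 0.042). As an illustration of how such a figure is computed for depth maps, here is a minimal sketch; the function name and the valid-pixel masking convention are our own assumptions, not the paper's code:

```python
import numpy as np

def depth_rmse(pred, gt, mask=None):
    """Root Mean Squared Error (in meters) between a reconstructed depth
    map and ground truth, restricted to pixels with valid ground truth."""
    if mask is None:
        # Hypothetical validity rule: ignore missing/zero ground-truth depth
        mask = np.isfinite(gt) & (gt > 0)
    diff = pred[mask] - gt[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 2x2 depth maps in meters: a constant 42 mm error yields RMSE = 0.042
gt = np.array([[0.50, 0.52], [0.48, 0.51]])
pred = gt + 0.042
print(round(depth_rmse(pred, gt), 3))  # 0.042
```

Masking matters in practice because commodity depth sensors return invalid (zero or NaN) readings on transparent surfaces, so the metric is only meaningful over pixels where ground truth exists.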

Original language: English
Pages (from-to): 9826-9833
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 4
DOIs
Publication status: Published - Oct 2022

Keywords

  • Computer vision for automation
  • Robotics and automation in life sciences
