Large-margin multi-modal deep learning for RGB-D object recognition

Anran Wang, Jiwen Lu, Jianfei Cai, Tat Jen Cham, Gang Wang

Research output: Contribution to journal › Article › Research › peer-review

146 Citations (Scopus)

Abstract

Most existing feature-learning methods for RGB-D object recognition either combine RGB and depth data in an undifferentiated manner from the outset or learn features from color and depth separately; neither approach adequately exploits the distinct characteristics of the two modalities or the shared relationship between them. In this paper, we propose a general CNN-based multi-modal learning framework for RGB-D object recognition. We first construct deep CNN layers for color and depth separately, and then connect them with a carefully designed multi-modal layer. This layer is designed not only to discover the most discriminative features for each modality but also to harness the complementary relationship between the two modalities. The results of the multi-modal layer are back-propagated to update the parameters of the CNN layers, and the multi-modal feature learning and back-propagation are performed iteratively until convergence. Experimental results on two widely used RGB-D object datasets show that our general multi-modal learning method achieves performance comparable to that of state-of-the-art methods specifically designed for RGB-D data.
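To make the two-stream design described above concrete, the following is a minimal sketch in PyTorch (an assumption; the paper predates it). The layer sizes, the concatenation-based fusion, the class count, and the use of `nn.MultiMarginLoss` as a stand-in for the paper's carefully designed large-margin multi-modal layer are all illustrative choices, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One modality-specific CNN, used for both the RGB and the depth stream."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # (batch, 64)

class MultiModalNet(nn.Module):
    """Separate color and depth streams joined by a shared fusion layer.

    Back-propagating the loss through the fusion layer into both streams
    updates the modality-specific CNN parameters jointly, mirroring the
    iterative scheme described in the abstract.
    """
    def __init__(self, num_classes: int):
        super().__init__()
        self.rgb_stream = StreamCNN(in_channels=3)    # color input
        self.depth_stream = StreamCNN(in_channels=1)  # depth input
        self.fusion = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.fusion(joint)

# Usage: one optimization step on random stand-in data.
model = MultiModalNet(num_classes=51)  # illustrative number of object categories
rgb = torch.randn(4, 3, 64, 64)
depth = torch.randn(4, 1, 64, 64)
labels = torch.randint(0, 51, (4,))
# Multi-class hinge loss as a simple large-margin objective.
loss = nn.MultiMarginLoss()(model(rgb, depth), labels)
loss.backward()  # gradients flow from the fusion layer into both streams
```

A margin-based loss is used here because it echoes the large-margin objective in the paper's title, but any classification loss would exercise the same joint back-propagation through both streams.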

Original language: English
Article number: 7258382
Pages (from-to): 1887-1898
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 17
Issue number: 11
DOIs
Publication status: Published - Nov 2015
Externally published: Yes

Keywords

  • Deep learning
  • large-margin feature learning
  • multi-modality
  • RGB-D object recognition
