Multi-modal unsupervised feature learning for RGB-D scene labeling

Anran Wang, Jiwen Lu, Gang Wang, Jianfei Cai, Tat-Jen Cham

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

31 Citations (Scopus)


Most existing approaches to RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them in a heuristic manner. There have been some attempts at learning features directly from raw RGB-D data, but the performance has not been satisfactory. In this paper, we adapt unsupervised feature learning to RGB-D labeling, treating it as a multi-modality learning problem. Our framework performs feature learning and feature encoding simultaneously, which significantly boosts performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state of the art.
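The pipeline the abstract describes (unsupervised dictionary learning per modality, feature encoding with the learned dictionary, and combination of the per-modality codes) can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual learning structure: spherical k-means for dictionary learning, a soft-threshold encoder, and all patch/dictionary sizes are stand-ins chosen for the sketch.

```python
import numpy as np

def learn_dictionary(patches, k, iters=10, seed=0):
    """Learn a k-atom dictionary with spherical k-means (a common
    unsupervised feature-learning choice; assumed, not the paper's method)."""
    rng = np.random.default_rng(seed)
    # L2-normalize patches and initialize centroids from the data
    X = patches / (np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8)
    D = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = (X @ D.T).argmax(axis=1)       # nearest atom by cosine similarity
        for j in range(k):
            members = X[assign == j]
            if len(members):
                c = members.sum(axis=0)
                D[j] = c / (np.linalg.norm(c) + 1e-8)
    return D

def encode(patches, D):
    """Soft-threshold encoding: activations above the per-patch mean
    similarity are kept, the rest are zeroed (an assumed encoder)."""
    s = patches @ D.T
    return np.maximum(0.0, s - s.mean(axis=1, keepdims=True))

# Toy stand-ins for RGB and depth patches extracted from a scene
rng = np.random.default_rng(1)
rgb_patches = rng.standard_normal((500, 27))    # e.g. 3x3 patches x 3 channels
depth_patches = rng.standard_normal((500, 9))   # e.g. 3x3 depth patches

D_rgb = learn_dictionary(rgb_patches, k=32)
D_depth = learn_dictionary(depth_patches, k=32)

# Combine the two modalities by concatenating their codes
features = np.hstack([encode(rgb_patches, D_rgb),
                      encode(depth_patches, D_depth)])
print(features.shape)  # (500, 64)
```

Stacking another layer of the same structure on these codes would yield the higher-level features the abstract mentions; the concatenation step is where the multi-modal combination happens in this sketch.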

Original language: English
Title of host publication: Computer Vision – ECCV 2014
Subtitle of host publication: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V
Editors: David Fleet, Tomas Pajdla, Bernt Schiele, Tinne Tuytelaars
Place of Publication: Cham, Switzerland
Number of pages: 15
ISBN (Electronic): 9783319106021
ISBN (Print): 9783319106014
Publication status: Published - 2014
Externally published: Yes
Event: European Conference on Computer Vision 2014 - Zurich, Switzerland
Duration: 6 Sep 2014 - 12 Sep 2014
Conference number: 13

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: European Conference on Computer Vision 2014
Abbreviated title: ECCV 2014


Keywords

  • joint feature learning and encoding
  • multi-modality
  • RGB-D scene labeling
  • unsupervised feature learning
