Zero-shot learning via category-specific visual-semantic mapping and label refinement

Li Niu, Jianfei Cai, Ashok Veeraraghavan, Liqing Zhang

Research output: Contribution to journal › Article › Research › peer-review

24 Citations (Scopus)

Abstract

Zero-shot learning (ZSL) aims to classify a test instance from an unseen category based on training instances from seen categories, where the gap between seen and unseen categories is generally bridged via a visual-semantic mapping between the low-level visual feature space and the intermediate semantic space. However, the visual-semantic mapping (i.e., projection) learnt from seen categories may not generalize well to unseen categories, a problem known as the projection domain shift in ZSL. To address this projection domain shift, we propose a method named adaptive embedding ZSL (AEZSL), which learns an adaptive visual-semantic mapping for each unseen category and then performs progressive label refinement. Moreover, to avoid learning a separate visual-semantic mapping for each unseen category in large-scale classification tasks, we further propose a deep adaptive embedding model named deep AEZSL, which shares the same underlying idea as AEZSL (i.e., the visual-semantic mapping should be category-specific and related to the semantic space) but only needs to be trained once and can be applied to an arbitrary number of unseen categories. Extensive experiments demonstrate that our proposed methods achieve state-of-the-art image classification results on three small-scale benchmark datasets and one large-scale benchmark dataset.
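To make the core idea of a category-specific visual-semantic mapping concrete, the sketch below shows one plausible way an unseen category's projection could be assembled from seen-category projections weighted by semantic similarity, and then used to score test instances. This is a minimal illustrative sketch, not the paper's exact AEZSL formulation: the function names, the cosine-similarity weighting scheme, and the scoring rule are assumptions introduced only for illustration.

```python
# Sketch of a category-specific visual-semantic mapping for ZSL.
# NOTE: not the paper's AEZSL algorithm; the weighting scheme, names,
# and scoring rule below are assumptions made purely for illustration.
import numpy as np

def adaptive_projection(W_seen, sim_to_seen):
    """Blend per-seen-category projections into one projection for an
    unseen category, weighted by semantic similarity (assumed scheme)."""
    w = np.maximum(sim_to_seen, 0.0)
    w = w / (w.sum() + 1e-12)                  # normalize similarity weights
    return np.tensordot(w, W_seen, axes=1)     # (d_vis, d_sem) adaptive mapping

def zsl_scores(X, W_seen, S_seen, S_unseen):
    """Score test instances against unseen-category semantic vectors.

    X        : (n, d_vis)                 visual features of test instances
    W_seen   : (c_seen, d_vis, d_sem)     projection learned per seen category
    S_seen   : (c_seen, d_sem)            semantic vectors (e.g., attributes) of seen categories
    S_unseen : (c_unseen, d_sem)          semantic vectors of unseen categories
    returns  : (n, c_unseen)              compatibility scores
    """
    # Cosine similarity between each unseen and each seen category in semantic space.
    Sn = S_seen / np.linalg.norm(S_seen, axis=1, keepdims=True)
    Un = S_unseen / np.linalg.norm(S_unseen, axis=1, keepdims=True)
    sims = Un @ Sn.T                           # (c_unseen, c_seen)

    scores = np.empty((X.shape[0], S_unseen.shape[0]))
    for k, (sim_k, s_k) in enumerate(zip(sims, S_unseen)):
        W_k = adaptive_projection(W_seen, sim_k)   # category-specific mapping
        scores[:, k] = (X @ W_k) @ s_k             # project visual features, compare to attributes
    return scores
```

Under this sketch, each unseen category receives its own projection driven by its position in the semantic space, which is the intuition the abstract attributes to both AEZSL and its deep variant; the paper's actual learning objective and label-refinement step are not reproduced here.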

Original language: English
Article number: 8476580
Pages (from-to): 965-979
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 2
DOIs
Publication status: Published - Feb 2019
Externally published: Yes

Keywords

  • domain adaptation
  • Zero-shot learning (ZSL)
