Meta-learning for multi-label few-shot classification

Christian Simon, Piotr Koniusz, Mehrtash Harandi

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

15 Citations (Scopus)


Even with the luxury of abundant data, multi-label classification is widely known to be a challenging task. This work targets the problem of multi-label meta-learning, where a model learns to predict multiple labels within a query (e.g., an image) from just a few supporting examples. In doing so, we first propose a benchmark for Few-Shot Learning (FSL) with multiple labels per sample. Next, we discuss and extend several solutions originally designed for conventional, single-label FSL to work in the multi-label regime. Lastly, we introduce a neural module that estimates the label count of a given sample by exploiting relational inference. We show empirically the benefit of the label count module, the label propagation algorithm, and the extensions of conventional FSL methods on three challenging datasets, namely MS-COCO, iMaterialist, and Open MIC. Overall, our thorough experiments suggest that the proposed label-propagation algorithm in conjunction with the neural label count module (NLC) should be considered the method of choice.
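To make the idea concrete, here is a minimal sketch of how a prototype-style FSL method can be extended to the multi-label regime described above: class prototypes are pooled over all support examples carrying a label, and a query's predicted label set is the top-k scoring classes, where k is an estimated label count (in the paper this comes from the NLC module; here it is passed in as a given). All function names and the mean-pooling choice are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def multilabel_prototypes(support_feats, support_labels):
    """Per-class prototypes for a multi-label support set.

    support_feats:  (n_support, dim) embedding matrix.
    support_labels: (n_support, n_classes) binary multi-label matrix.
    A sample contributes to every class it is labelled with
    (mean pooling is an assumption, not the paper's exact choice).
    """
    n_classes = support_labels.shape[1]
    protos = np.zeros((n_classes, support_feats.shape[1]))
    for c in range(n_classes):
        mask = support_labels[:, c] == 1
        protos[c] = support_feats[mask].mean(axis=0)
    return protos

def predict_multilabel(query_feat, protos, label_count):
    """Predict a label set of size `label_count` for one query.

    Scores each class by negative squared distance to its prototype,
    then keeps the top-k classes, with k supplied by a label-count
    estimator (standing in for the paper's NLC module).
    """
    scores = -((protos - query_feat) ** 2).sum(axis=1)
    topk = np.argsort(scores)[::-1][:label_count]
    pred = np.zeros(len(protos), dtype=int)
    pred[topk] = 1
    return pred
```

For example, with three one-label support samples acting as their own prototypes, a query lying between the first two prototypes and a label count of 2 is assigned exactly those two labels.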

Original language: English
Title of host publication: Proceedings, 2022 IEEE Winter Conference on Applications of Computer Vision, WACV 2022
Editors: Eric Mortensen
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 10
ISBN (Electronic): 9781665409155
Publication status: Published - 2022
Event: IEEE Winter Conference on Applications of Computer Vision 2022 - Waikoloa, United States of America
Duration: 4 Jan 2022 – 8 Jan 2022


Conference: IEEE Winter Conference on Applications of Computer Vision 2022
Abbreviated title: WACV 2022
Country/Territory: United States of America


  • Few-shot
  • Semi- and Un-supervised Learning
  • Deep Learning
  • Transfer
