Min-max statistical alignment for transfer learning

Samitha Herath, Mehrtash Harandi, Basura Fernando, Richard Nock

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

7 Citations (Scopus)

Abstract

A profound idea in learning invariant features for transfer learning is to align statistical properties of the domains. In practice, this is achieved by minimizing the disparity between the domains, usually measured in terms of their statistical properties. We question the capability of this school of thought and propose to minimize the maximum disparity between domains. Furthermore, we develop an end-to-end learning scheme that enables us to benefit from the proposed min-max strategy in training deep models. We show that the min-max solution can outperform the existing statistical alignment solutions, and can compete with state-of-the-art solutions on two challenging learning tasks, namely, Unsupervised Domain Adaptation (UDA) and Zero-Shot Learning (ZSL).
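The min-max strategy in the abstract can be read as an alternating game: an adversary searches for the statistics under which the two domains look most different, and the feature network is then trained to close that worst-case gap. The PyTorch sketch below is only illustrative of this idea; `FeatureNet`, `projected_stat_gap`, the mean-plus-covariance disparity, and the orthonormal adversarial projection `W` are assumptions made for the example, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Hypothetical shared feature extractor for source and target data."""
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def projected_stat_gap(fs, ft, W):
    """Disparity between first- and second-order statistics of the
    source/target features after projecting them with W."""
    ps, pt = fs @ W, ft @ W
    mean_gap = (ps.mean(0) - pt.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(ps.T) - torch.cov(pt.T)).pow(2).sum()
    return mean_gap + cov_gap

feat = FeatureNet()
W = torch.empty(64, 16)
torch.nn.init.orthogonal_(W)
W.requires_grad_(True)  # adversarial projection (assumed orthonormal)
opt_min = torch.optim.Adam(feat.parameters(), lr=1e-3)
opt_max = torch.optim.Adam([W], lr=1e-3)

xs, xt = torch.randn(32, 256), torch.randn(32, 256)  # toy source/target batches

for step in range(100):
    # Max step: move W toward the projection with the largest statistical gap.
    gap = projected_stat_gap(feat(xs).detach(), feat(xt).detach(), W)
    opt_max.zero_grad()
    (-gap).backward()
    opt_max.step()
    with torch.no_grad():  # re-orthonormalize W so the max stays bounded
        W.copy_(torch.linalg.qr(W)[0])
    # Min step: update the features to shrink this worst-case gap.
    gap = projected_stat_gap(feat(xs), feat(xt), W.detach())
    opt_min.zero_grad()
    gap.backward()
    opt_min.step()
```

In a full UDA setup this alignment term would be added to a supervised loss on the labeled source domain; the alternating min/max steps stand in for whatever end-to-end scheme the paper actually uses.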

Original language: English
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Editors: Abhinav Gupta, Derek Hoiem, Gang Hua, Zhuowen Tu
Place of publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 9280-9289
Number of pages: 10
ISBN (Electronic): 9781728132938
ISBN (Print): 9781728132945
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2019 - Long Beach, United States of America
Duration: 16 Jun 2019 - 20 Jun 2019
Conference number: 32nd
http://cvpr2019.thecvf.com/
https://ieeexplore.ieee.org/xpl/conhome/8938205/proceeding (Proceedings)

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2019
Abbreviated title: CVPR 2019
Country/Territory: United States of America
City: Long Beach
Period: 16/06/19 - 20/06/19
Internet address: http://cvpr2019.thecvf.com/

Keywords

  • Categorization
  • Recognition: Detection
  • Representation Learning
  • Retrieval
