A theoretically sound upper bound on the triplet loss for improving the efficiency of deep distance metric learning

Thanh-Toan Do, Toan Tran, Ian Reid, Vijay Kumar, Tuan Hoang, Gustavo Carneiro

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

7 Citations (Scopus)

Abstract

We propose a method that substantially improves the efficiency of deep distance metric learning based on the optimization of the triplet loss function. One epoch of such a training process based on a naive optimization of the triplet loss function has a run-time complexity O(N³), where N is the number of training samples. Such optimization scales poorly, and the most common approach proposed to address this high complexity issue is based on sub-sampling the set of triplets needed for the training process. Another approach explored in the field relies on an ad-hoc linearization (in terms of N) of the triplet loss that introduces class centroids, which must be optimized using the whole training set for each mini-batch; this means that a naive implementation of this approach has run-time complexity O(N²). This complexity issue is usually mitigated with poor, but computationally cheap, approximate centroid optimization methods. In this paper, we first propose a solid theory on the linearization of the triplet loss with the use of class centroids, where the main conclusion is that our new linear loss represents a tight upper bound on the triplet loss. Furthermore, based on the theory above, we propose a training algorithm that no longer requires the centroid optimization step, which means that our approach is the first in the field with a guaranteed linear run-time complexity. We show that the training of deep distance metric learning methods using the proposed upper bound is substantially faster than triplet-based methods, while producing competitive retrieval accuracy results on benchmark datasets (CUB-200-2011 and CARS196).
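The abstract contrasts the cubic cost of enumerating triplets with a linear-time, centroid-based surrogate. The sketch below only illustrates that complexity argument; it is not the paper's actual loss or its proved upper bound. The function names, the margin value, the nearest-centroid push term, and the toy data are all assumptions made for this example.

```python
import numpy as np

def naive_triplet_loss(X, y, margin=0.2):
    # Enumerate every (anchor, positive, negative) triplet: O(N^3) in the number
    # of training samples N, which is the cost the paper sets out to avoid.
    N = len(X)
    total, count = 0.0, 0
    for a in range(N):
        for p in range(N):
            if p == a or y[p] != y[a]:
                continue
            d_ap = np.sum((X[a] - X[p]) ** 2)
            for n in range(N):
                if y[n] == y[a]:
                    continue
                d_an = np.sum((X[a] - X[n]) ** 2)
                total += max(0.0, d_ap - d_an + margin)
                count += 1
    return total / max(count, 1)

def centroid_surrogate_loss(X, y, margin=0.2):
    # Schematic centroid-based surrogate (an assumption, not the paper's bound):
    # pull each sample toward its own class centroid and push it away from the
    # nearest other-class centroid. A single pass over the data, so the cost is
    # linear in N for a fixed number of classes.
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    total = 0.0
    for i in range(len(X)):
        d_own = np.sum((X[i] - centroids[y[i]]) ** 2)
        d_other = min(np.sum((X[i] - centroids[c]) ** 2) for c in classes if c != y[i])
        total += max(0.0, d_own - d_other + margin)
    return total / len(X)

# Toy usage with random 2-D "embeddings" for three well-separated classes.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = rng.normal(scale=0.5, size=(30, 2)) + np.repeat(means, 10, axis=0)
y = np.repeat(np.arange(3), 10)
print(naive_triplet_loss(X, y), centroid_surrogate_loss(X, y))
```

Both functions compute an average hinge-style loss, but only the first one scales cubically with the dataset size; the paper's contribution is showing that a centroid-based linear loss of this general flavour can be made a theoretically tight upper bound rather than an ad-hoc approximation.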

Original language: English
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Editors: Abhinav Gupta, Derek Hoiem, Gang Hua, Zhuowen Tu
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 10396-10405
Number of pages: 10
ISBN (Electronic): 9781728132938
ISBN (Print): 9781728132945
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2019 - Long Beach, United States of America
Duration: 16 Jun 2019 - 20 Jun 2019
Conference number: 32nd
http://cvpr2019.thecvf.com/
https://ieeexplore.ieee.org/xpl/conhome/8938205/proceeding (Proceedings)

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Volume: 2019-June
ISSN (Print): 1063-6919
ISSN (Electronic): 2575-7075

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2019
Abbreviated title: CVPR 2019
Country: United States of America
City: Long Beach
Period: 16/06/19 - 20/06/19

Keywords

  • Categorization
  • Deep Learning
  • Recognition: Detection
  • Representation Learning
  • Retrieval
