Generative metric learning for adversarially robust open-world person re-identification

Deyin Liu, Lin (Yuanbo) Wu, Richang Hong, Zongyuan Ge, Jialie Shen, Farid Boussaid, Mohammed Bennamoun

Research output: Contribution to journal › Article › Research › peer-review

15 Citations (Scopus)


The vulnerability of re-identification (re-ID) models to adversarial attacks is of significant concern, as criminals may use adversarial perturbations to evade surveillance systems. Unlike a closed-world re-ID setting (i.e., a fixed number of training categories), a reliable re-ID system in the open world requires training a robust yet discriminative classifier that remains reliable when presented with unknown examples of an identity. In this work, we improve the robustness of open-world re-ID models by proposing a generative metric learning approach that generates adversarial examples which are regularized to produce a robust distance metric. The proposed approach leverages the expressive capability of generative adversarial networks to defend re-ID models against feature disturbance attacks. By generating variants of target people and sampling triplet units for metric learning, the learned distance metrics are regularized to produce accurate predictions in the feature metric space. Experimental results on three re-ID datasets (Market-1501, DukeMTMC-reID, and MSMT17) demonstrate the robustness of our method.
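The triplet-based metric learning described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a standard triplet margin loss on L2 distances, where the positive sample stands in for a generator-produced variant of the anchor identity:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet margin loss on L2 distances.

    In the setting sketched here, `positive` would be a GAN-generated
    variant of the anchor identity (a hypothetical stand-in for the
    paper's generator output), and `negative` a different identity.
    """
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)

# Toy 4-D features: a generated "variant" close to the anchor,
# a different identity far away.
anchor   = np.array([1.0, 0.0, 0.0, 0.0])
variant  = np.array([0.9, 0.1, 0.0, 0.0])   # generated positive
impostor = np.array([0.0, 0.0, 1.0, 0.0])   # negative identity

loss = triplet_loss(anchor, variant, impostor)
```

Minimizing this loss pulls generated variants of an identity toward the anchor while pushing other identities at least `margin` apart, which is the mechanism by which the distance metric is regularized to stay accurate under feature perturbations.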

Original language: English
Article number: 20
Number of pages: 19
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Issue number: 1
Publication status: Published - 5 Jan 2023


  • Adversarial attack
  • generative metric learning
  • open-world person re-identification
  • robust models
