How trustworthy are performance evaluations for basic vision tasks?

Tran Thien Dat Nguyen, Hamid Rezatofighi, Ba-Ngu Vo, Ba-Tuong Vo, Silvio Savarese, Ian Reid

Research output: Contribution to journal › Article › Research › peer-review

7 Citations (Scopus)

Abstract

This paper examines performance evaluation criteria for basic vision tasks involving sets of objects, namely object detection, instance-level segmentation, and multi-object tracking. The rankings of algorithms produced by an existing criterion can fluctuate with different choices of parameters, e.g. the Intersection over Union (IoU) threshold, making its evaluations unreliable. More importantly, there is no means to verify whether we can trust the evaluations of a criterion. This work suggests a notion of trustworthiness for performance criteria, which requires (i) robustness to parameters for reliability, (ii) contextual meaningfulness in sanity tests, and (iii) consistency with mathematical requirements such as the metric properties. We observe that these requirements are overlooked by many widely used criteria, and explore alternative criteria using metrics for sets of shapes. We also assess all these criteria against the suggested requirements for trustworthiness.
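To make the threshold sensitivity concrete, here is a minimal Python sketch (not from the paper; all boxes, detectors, and numbers are hypothetical) showing how the ranking of two detectors by true-positive count can flip when the IoU threshold moves from 0.5 to 0.75:

```python
# Hypothetical example: two detectors whose ranking flips with the IoU threshold.
# Box format: (x1, y1, x2, y2); all coordinates are made up for illustration.

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def true_positives(detections, ground_truth, threshold):
    """Greedy one-to-one matching: a detection is a true positive if it
    overlaps a still-unmatched ground-truth box with IoU >= threshold."""
    unmatched = list(ground_truth)
    tp = 0
    for d in detections:
        best = max(unmatched, key=lambda g: iou(d, g), default=None)
        if best is not None and iou(d, best) >= threshold:
            unmatched.remove(best)
            tp += 1
    return tp

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
det_a = [(1, 1, 11, 11), (21, 21, 31, 31)]  # two loose boxes, IoU ~ 0.68 each
det_b = [(0, 0, 10, 10)]                    # one perfect box, one object missed

for t in (0.5, 0.75):
    print(f"IoU threshold {t}: A={true_positives(det_a, gt, t)} TP, "
          f"B={true_positives(det_b, gt, t)} TP")
# IoU threshold 0.5 : A=2 TP, B=1 TP  -> A ranks higher
# IoU threshold 0.75: A=0 TP, B=1 TP  -> B ranks higher
```

Detector A's two loose boxes win at threshold 0.5 but count for nothing at 0.75, while detector B's single tight box is unaffected; this parameter-dependent reversal is exactly the kind of unreliability the paper's trustworthiness requirements are meant to rule out.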

Original language: English
Pages (from-to): 8538-8552
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 7
DOIs
Publication status: Published - 1 Jul 2023

Keywords

  • Detectors
  • Instance-level segmentation
  • Measurement
  • Metric
  • Multi-object tracking
  • Object detection
  • Performance evaluation
  • Reliability
  • Shape
  • Task analysis
