Quality-guided fusion-based co-saliency estimation for image co-segmentation and colocalization

Koteswar Rao Jerripothula, Jianfei Cai, Junsong Yuan

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Despite the advantage of exploiting inter-image information by jointly processing images for co-saliency, co-segmentation, or co-localization, joint processing introduces a few drawbacks: 1) questionable necessity in scenarios where joint processing might not perform better than individual image processing; 2) increased complexity compared with individual image processing; and 3) complex parameter tuning. In this paper, we propose a simple co-saliency estimation method in which we fuse saliency maps of different images using a dense correspondence technique. More importantly, the co-saliency estimation is guided by our proposed quality measure, which helps decide whether the fusion actually improves the quality of the saliency map. Our basic idea for developing the quality metric is that a high-quality saliency map should have a well-separated foreground and background, as well as a concentrated foreground, as in ground-truth maps. Extensive experiments on several benchmark datasets, including the large-scale ImageNet dataset, for the applications of foreground co-segmentation and co-localization show that our proposed framework achieves highly competitive results.
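To make the quality-guided idea concrete, here is a minimal Python sketch of a toy quality score built from the two cues named in the abstract: foreground/background separation and foreground concentration. This is an illustrative assumption only, not the authors' metric; the function name `toy_quality_score` and the threshold `fg_threshold` are hypothetical choices for this sketch.

```python
import numpy as np

def toy_quality_score(saliency, fg_threshold=0.5):
    """Toy quality score for a saliency map with values in [0, 1].

    Two cues loosely inspired by the abstract (not the paper's metric):
    (1) separation: gap between mean saliency of the thresholded
        foreground and background, and
    (2) concentration: spatial compactness of the foreground
        (smaller spread of foreground pixel coordinates is better).
    """
    fg_mask = saliency >= fg_threshold
    if fg_mask.sum() == 0 or (~fg_mask).sum() == 0:
        return 0.0  # degenerate map: all foreground or all background

    # Cue 1: separation between foreground and background saliency values.
    separation = float(saliency[fg_mask].mean() - saliency[~fg_mask].mean())

    # Cue 2: concentration, via normalized spread of foreground coordinates.
    ys, xs = np.nonzero(fg_mask)
    h, w = saliency.shape
    spread = (ys.std() / h + xs.std() / w) / 2.0
    concentration = 1.0 - spread

    return separation * concentration


# Usage: keep the fused map only if it scores higher than the original,
# mimicking the quality-guided decision described in the abstract.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.random((64, 64))
    fused = original.copy()
    fused[20:40, 20:40] = 0.9  # suppose fusion highlighted a compact object
    chosen = fused if toy_quality_score(fused) > toy_quality_score(original) else original
```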

Original language: English
Article number: 8269367
Pages (from-to): 2466-2477
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 20
Issue number: 9
DOIs
Publication status: Published - Sep 2018
Externally published: Yes

Keywords

  • co-localization
  • co-saliency
  • co-segmentation
  • foreground
  • fusion
  • quality
