Image co-segmentation via saliency co-fusion

Koteswar Rao Jerripothula, Jianfei Cai, Junsong Yuan

Research output: Contribution to journal › Article › peer-review

129 Citations (Scopus)

Abstract

Most existing high-performance co-segmentation algorithms are complex, both because of the way they co-label a set of images and because they typically require fine-tuning a few parameters for effective co-segmentation. In this paper, instead of following the conventional approach of co-labeling multiple images, we propose to first exploit inter-image information through co-saliency and then perform single-image segmentation on each individual image. To make the system robust and to avoid heavy dependence on any single saliency extraction method, we apply multiple existing saliency extraction methods to each image to obtain diverse saliency maps. Our major contribution is the proposed method that fuses these diverse saliency maps by exploiting inter-image information, which we call saliency co-fusion. Experiments on five benchmark datasets with eight saliency extraction methods show that our saliency co-fusion based approach achieves competitive performance, even without parameter fine-tuning, when compared with state-of-the-art methods.
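The sketch below illustrates the general saliency co-fusion idea described in the abstract; it is not the authors' exact formulation. Assumed simplifications: the saliency maps come from external detectors (stubbed here with random maps), the fusion weight of each map is derived from how well its foreground colour model agrees with the other images' foregrounds, and the final segmentation is a plain threshold rather than the paper's single-image segmentation step. All function names (color_histogram, co_fuse) are hypothetical.

import numpy as np

def color_histogram(image, mask, bins=8):
    """3-D colour histogram of the pixels selected by a binary mask."""
    pixels = image[mask > 0.5]
    if pixels.size == 0:
        return np.zeros(bins ** 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-8)

def co_fuse(images, saliency_maps_per_image, threshold=0.5):
    """Fuse each image's stack of saliency maps using inter-image agreement.

    images: list of HxWx3 uint8 arrays
    saliency_maps_per_image: list of lists of HxW float maps in [0, 1]
    Returns fused saliency maps and binary segmentation masks.
    """
    # Foreground colour model per (image, saliency map), using a crude
    # thresholded version of each individual map.
    histograms = [[color_histogram(img, sal > threshold) for sal in maps]
                  for img, maps in zip(images, saliency_maps_per_image)]

    fused_maps, masks = [], []
    for i, maps in enumerate(saliency_maps_per_image):
        # Weight each map by its average histogram intersection with the
        # other images' foreground models: maps that agree across the group
        # contribute more to the fusion.
        others = [h for j, hs in enumerate(histograms) if j != i for h in hs]
        weights = np.asarray([np.mean([np.minimum(h, ho).sum() for ho in others])
                              for h in histograms[i]])
        weights = weights / (weights.sum() + 1e-8)
        fused = np.tensordot(weights, np.stack(maps), axes=1)
        fused_maps.append(fused)
        masks.append((fused > threshold).astype(np.uint8))
    return fused_maps, masks

# Toy usage with synthetic images and stubbed "saliency detectors".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
    saliency = [[rng.random((64, 64)) for _ in range(4)] for _ in images]  # 4 detectors
    fused, masks = co_fuse(images, saliency)
    print(fused[0].shape, masks[0].sum())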

Original language: English
Pages (from-to): 1896-1909
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Volume: 18
Issue number: 9
DOIs
Publication status: Published - Sept 2016
Externally published: Yes

Keywords

  • Co-fusion
  • co-saliency
  • co-segmentation
  • fusion
  • saliency
  • segmentation
