A benchmark for semantic image segmentation

Hui Li, Jianfei Cai, Thi Nhat Anh Nguyen, Jianmin Zheng

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

30 Citations (Scopus)


Though quite a few image segmentation benchmark datasets have been constructed, there is no suitable benchmark for semantic image segmentation. In this paper, we construct a benchmark for this purpose, where the ground truths are generated by leveraging the existing fine-grained ground truths in the Berkeley Segmentation Dataset (BSD) as well as by using an interactive segmentation tool for new images. We also propose a percept-tree-based region merging strategy that dynamically adapts the ground truth when evaluating a test segmentation. Moreover, we propose a new evaluation metric that is easy to understand and compute and does not require boundary matching. Experimental results show that, compared with BSD, the generated ground-truth dataset is more suitable for evaluating semantic image segmentation, and the conducted user study demonstrates that the proposed evaluation metric matches user rankings very well.
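The abstract's idea of dynamically adapting a hierarchical ground truth can be illustrated with a small sketch. This is not the paper's exact algorithm; it is a minimal Python illustration, assuming a region tree given as `(pixels, children)` tuples and a test segmentation given as a pixel-to-label mapping, where sibling ground-truth regions are merged whenever the test segmentation does not distinguish them:

```python
# Illustrative sketch (not the paper's exact method): adapt a ground-truth
# region hierarchy to a test segmentation by merging child regions that the
# test segmentation assigns to the same dominant segment.

def dominant_label(region, test_seg):
    """Return the most frequent test-segment label among a region's pixels."""
    counts = {}
    for px in region:
        lbl = test_seg[px]
        counts[lbl] = counts.get(lbl, 0) + 1
    return max(counts, key=counts.get)

def adapt_ground_truth(node, test_seg):
    """Recursively flatten a region tree into a list of regions.

    `node` is a (pixels, children) tuple; leaves have children == [].
    Sibling regions whose pixels fall under the same dominant test label
    are merged, so the adapted ground truth never penalizes the test
    segmentation for a semantic grouping it did not attempt to separate.
    """
    pixels, children = node
    if not children:
        return [frozenset(pixels)]
    merged = {}
    for child in children:
        for region in adapt_ground_truth(child, test_seg):
            key = dominant_label(region, test_seg)
            merged.setdefault(key, set()).update(region)
    return [frozenset(r) for r in merged.values()]
```

For example, a root region split into two leaf regions collapses back into one adapted region when the test segmentation labels all pixels identically, but stays split when the test segmentation separates them.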

Original language: English
Title of host publication: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
Publisher: IEEE, Institute of Electrical and Electronics Engineers
ISBN (Print): 9781479900152
Publication status: Published - 2013
Externally published: Yes
Event: IEEE International Conference on Multimedia and Expo 2013 - Fairmont Hotel, San Jose, United States of America
Duration: 15 Jul 2013 - 19 Jul 2013
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6596168 (IEEE Conference Proceedings)

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X


Conference: IEEE International Conference on Multimedia and Expo 2013
Abbreviated title: ICME 2013
Country/Territory: United States of America
City: San Jose


  • Benchmark
  • Dataset
  • Evaluation
  • Semantic Image Segmentation
