Skin lesion segmentation via generative adversarial networks with dual discriminators

Baiying Lei, Zaimin Xia, Feng Jiang, Xudong Jiang, Zongyuan Ge, Yanwu Xu, Jing Qin, Siping Chen, Tianfu Wang, Shuqiang Wang

Research output: Contribution to journal › Article › Research › peer-review

190 Citations (Scopus)

Abstract

Skin lesion segmentation from dermoscopy images is a fundamental yet challenging task in computer-aided skin diagnosis systems due to large variations in the views and scales of lesion areas. We propose a novel and effective generative adversarial network (GAN) to meet these challenges. Specifically, the architecture integrates two modules: a segmentation module based on a U-Net with skip connections and dense convolutions (UNet-SCDC), and a dual-discrimination (DD) module. The UNet-SCDC module uses dense dilated convolution blocks to generate deep representations that preserve fine-grained information, while the DD module uses two discriminators that jointly decide whether their input is real or fake. One discriminator, trained with a traditional adversarial loss, focuses on differences between the boundaries of the generated segmentation masks and the ground truths; the other examines the contextual environment of the target object in the original image using a conditional discriminative loss. We integrate these two modules and train the proposed GAN end-to-end. The proposed GAN is evaluated on the public International Skin Imaging Collaboration (ISIC) Skin Lesion Challenge datasets of 2017 and 2018. Extensive experimental results demonstrate that the proposed network achieves segmentation performance superior to state-of-the-art methods.
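
To make the two architectural ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: all class and function names (DenseDilatedBlock, patch_disc, d_losses, g_losses), layer sizes, and the loss weight lam are illustrative assumptions. It shows a dense dilated-convolution block in the spirit of UNet-SCDC, and how an unconditional mask discriminator (boundary plausibility) and a conditional image-mask discriminator (contextual fit) can be trained jointly against one segmentation generator.

```python
# Hedged sketch of a dual-discriminator segmentation GAN; names and
# hyper-parameters are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Toy dense dilated-convolution block: each layer receives the
    concatenation of all earlier feature maps (dense connectivity), and
    growing dilation enlarges the receptive field without pooling."""
    def __init__(self, in_ch, growth=32, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

def patch_disc(in_ch):
    """Small PatchGAN-style critic returning a grid of real/fake logits."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 4, stride=1, padding=1))

bce = nn.BCEWithLogitsLoss()

def d_losses(gen, d_mask, d_cond, image, gt_mask):
    """Discriminator step: the unconditional critic sees masks alone,
    the conditional critic sees (image, mask) pairs. Both label the
    ground truth as real and the generated mask as fake."""
    with torch.no_grad():
        fake = torch.sigmoid(gen(image))  # generator assumed to emit logits
    loss = 0.0
    for logits, target in [
            (d_mask(gt_mask), 1.0),
            (d_mask(fake), 0.0),
            (d_cond(torch.cat([image, gt_mask], 1)), 1.0),
            (d_cond(torch.cat([image, fake], 1)), 0.0)]:
        loss = loss + bce(logits, torch.full_like(logits, target))
    return loss

def g_losses(gen, d_mask, d_cond, image, gt_mask, lam=0.1):
    """Generator step: per-pixel supervision plus two adversarial terms,
    i.e. the generator tries to fool both discriminators jointly."""
    logits = gen(image)
    fake = torch.sigmoid(logits)
    seg = bce(logits, gt_mask)            # supervised segmentation loss
    m_logits = d_mask(fake)
    c_logits = d_cond(torch.cat([image, fake], 1))
    adv = (bce(m_logits, torch.ones_like(m_logits)) +
           bce(c_logits, torch.ones_like(c_logits)))
    return seg + lam * adv
```

Training would alternate updates on d_losses and g_losses end-to-end, e.g. with d_mask = patch_disc(1) and d_cond = patch_disc(4) for RGB dermoscopy images; the weight lam and all layer widths are guesses rather than values reported in the paper.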

Original language: English
Article number: 101716
Number of pages: 12
Journal: Medical Image Analysis
Volume: 64
DOIs
Publication status: Published - Aug 2020

Keywords

  • Dense convolution U-Net
  • Dual discriminators
  • Generative adversarial network
  • Skin lesion segmentation
