TY - JOUR
T1 - Skin lesion segmentation via generative adversarial networks with dual discriminators
AU - Lei, Baiying
AU - Xia, Zaimin
AU - Jiang, Feng
AU - Jiang, Xudong
AU - Ge, Zongyuan
AU - Xu, Yanwu
AU - Qin, Jing
AU - Chen, Siping
AU - Wang, Tianfu
AU - Wang, Shuqiang
PY - 2020/8
Y1 - 2020/8
N2 - Skin lesion segmentation from dermoscopy images is a fundamental yet challenging task in computer-aided skin diagnosis systems due to large variations in the views and scales of lesion areas. We propose a novel and effective generative adversarial network (GAN) to meet these challenges. Specifically, the network architecture integrates two modules: a skip-connection and dense-convolution U-Net (UNet-SCDC) based segmentation module and a dual discrimination (DD) module. The UNet-SCDC module uses dense dilated convolution blocks to generate a deep representation that preserves fine-grained information, while the DD module employs two discriminators to jointly decide whether their input is real or fake. One discriminator, with a traditional adversarial loss, focuses on the boundary differences between the generated segmentation masks and the ground truths, while the other examines the contextual environment of the target object in the original image using a conditional discriminative loss. We integrate these two modules and train the proposed GAN in an end-to-end manner. The proposed GAN is evaluated on the public International Skin Imaging Collaboration (ISIC) Skin Lesion Challenge datasets of 2017 and 2018. Extensive experimental results demonstrate that the proposed network achieves superior segmentation performance to state-of-the-art methods.
AB - Skin lesion segmentation from dermoscopy images is a fundamental yet challenging task in computer-aided skin diagnosis systems due to large variations in the views and scales of lesion areas. We propose a novel and effective generative adversarial network (GAN) to meet these challenges. Specifically, the network architecture integrates two modules: a skip-connection and dense-convolution U-Net (UNet-SCDC) based segmentation module and a dual discrimination (DD) module. The UNet-SCDC module uses dense dilated convolution blocks to generate a deep representation that preserves fine-grained information, while the DD module employs two discriminators to jointly decide whether their input is real or fake. One discriminator, with a traditional adversarial loss, focuses on the boundary differences between the generated segmentation masks and the ground truths, while the other examines the contextual environment of the target object in the original image using a conditional discriminative loss. We integrate these two modules and train the proposed GAN in an end-to-end manner. The proposed GAN is evaluated on the public International Skin Imaging Collaboration (ISIC) Skin Lesion Challenge datasets of 2017 and 2018. Extensive experimental results demonstrate that the proposed network achieves superior segmentation performance to state-of-the-art methods.
KW - Dense convolution U-Net
KW - Dual discriminators
KW - Generative adversarial network
KW - Skin lesion segmentation
UR - http://www.scopus.com/inward/record.url?scp=85085554310&partnerID=8YFLogxK
U2 - 10.1016/j.media.2020.101716
DO - 10.1016/j.media.2020.101716
M3 - Article
C2 - 32492581
AN - SCOPUS:85085554310
SN - 1361-8415
VL - 64
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 101716
ER -