Skin disease recognition using deep saliency features and multimodal learning of dermoscopy and clinical images

Zongyuan Ge, Sergey Demyanov, Rajib Chakravorty, Adrian Bowling, Rahil Garnavi

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Skin cancer is the most common cancer worldwide, and melanoma, the most fatal form, accounts for more than 10,000 deaths annually in Australia and the United States. The 5-year survival rate for melanoma can exceed 90% if it is detected at an early stage. However, intrinsic visual similarity across various skin conditions makes diagnosis challenging for both clinicians and automated classification methods. Many automated skin cancer diagnostic systems have been proposed in the literature, all of which consider solely dermoscopy images in their analysis. In reality, however, clinicians use two imaging modalities: an initial screening with clinical photography to capture a macro view of the mole, followed by dermoscopy imaging, which visualizes morphological structures within the skin lesion. Evidence shows that these two modalities provide complementary visual features that can support the decision-making process. In this work, we propose a novel deep convolutional neural network (DCNN) architecture along with a saliency feature descriptor to capture discriminative features of the two modalities for skin lesion classification. The proposed DCNN accepts a pair of images, the clinical and dermoscopic views of a single lesion, and is capable of learning single-modality and cross-modality representations simultaneously. Using one of the largest collected skin lesion datasets, we demonstrate that the proposed multi-modality method significantly outperforms single-modality methods on three tasks: differentiating between 15 skin diseases, distinguishing cancerous lesions (3 cancer types including melanoma) from non-cancerous moles, and distinguishing melanoma from benign cases.
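The abstract describes a network that takes a paired clinical photograph and dermoscopy image of the same lesion and learns single-modality and cross-modality representations jointly. The sketch below is a minimal, hypothetical illustration of that general idea only, not the authors' published architecture or saliency descriptor; the backbone choice, module names, layer sizes, and joint-loss training suggestion are all assumptions.

```python
# Hypothetical two-branch multimodal classifier (PyTorch).
# Each modality gets its own CNN backbone and auxiliary head; a fused head
# operates on the concatenated features (the "cross-modality" representation).
import torch
import torch.nn as nn
from torchvision import models

class MultiModalLesionNet(nn.Module):
    def __init__(self, num_classes: int = 15):
        super().__init__()
        # One backbone per modality; in practice these would typically be
        # initialized with ImageNet-pretrained weights.
        self.clinical_branch = models.resnet50(weights=None)
        self.dermoscopy_branch = models.resnet50(weights=None)
        feat_dim = self.clinical_branch.fc.in_features
        self.clinical_branch.fc = nn.Identity()      # keep pooled features only
        self.dermoscopy_branch.fc = nn.Identity()
        # Single-modality (auxiliary) classifiers.
        self.clinical_head = nn.Linear(feat_dim, num_classes)
        self.dermoscopy_head = nn.Linear(feat_dim, num_classes)
        # Cross-modality classifier over the concatenated features.
        self.fused_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, clinical_img: torch.Tensor, dermoscopy_img: torch.Tensor):
        f_c = self.clinical_branch(clinical_img)      # (B, feat_dim)
        f_d = self.dermoscopy_branch(dermoscopy_img)  # (B, feat_dim)
        fused = torch.cat([f_c, f_d], dim=1)
        # Returning all three predictions lets a training loop sum their losses,
        # so single-modality and cross-modality representations are learned jointly.
        return self.clinical_head(f_c), self.dermoscopy_head(f_d), self.fused_head(fused)

# Example forward pass with dummy image pairs.
model = MultiModalLesionNet(num_classes=15)
clin = torch.randn(2, 3, 224, 224)
derm = torch.randn(2, 3, 224, 224)
p_clin, p_derm, p_fused = model(clin, derm)
print(p_fused.shape)  # torch.Size([2, 15])
```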

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017
Subtitle of host publication: 20th International Conference, Quebec City, QC, Canada, September 11–13, 2017, Proceedings, Part III
Editors: Maxime Descoteaux, Lena Maier-Hein, Alfred Franz, D. Louis Collins, Simon Duchesne
Place of Publication: Cham, Switzerland
Publisher: Springer
Pages: 250-258
Number of pages: 9
ISBN (Electronic): 9783319661797
ISBN (Print): 9783319661780
DOIs: https://doi.org/10.1007/978-3-319-66179-7_29
Publication status: Published - 1 Jan 2017
Externally published: Yes
Event: Medical Image Computing and Computer-Assisted Intervention 2017 - Quebec City Convention Centre, Quebec, Canada
Duration: 10 Sep 2017 – 14 Sep 2017
Conference number: 20th
http://www.miccai2017.org/

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Volume: 10435 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Medical Image Computing and Computer-Assisted Intervention 2017
Abbreviated title: MICCAI 2017
Country: Canada
City: Quebec
Period: 10/09/17 – 14/09/17
Internet address: http://www.miccai2017.org/

Cite this

Ge, Z., Demyanov, S., Chakravorty, R., Bowling, A., & Garnavi, R. (2017). Skin disease recognition using deep saliency features and multimodal learning of dermoscopy and clinical images. In M. Descoteaux, L. Maier-Hein, A. Franz, D. L. Collins, & S. Duchesne (Eds.), Medical Image Computing and Computer Assisted Intervention − MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11–13, 2017, Proceedings, Part III (pp. 250-258). (Lecture Notes in Computer Science; Vol. 10435 LNCS). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-66179-7_29