Exploring Visual Explainability in Breast Cancer Histopathological Images Malignancy Classification

Activity: Other Teaching Engagements and Non-HDR Supervisions

Description

Breast cancer malignancy classification using deep learning has seen significant advances, especially in overcoming the challenge of limited labeled data, where techniques such as transfer learning and self-supervised methods have become pivotal. Given the critical nature of medical diagnostics, clarity and transparency in the decisions made by deep neural networks are of paramount importance. This research examines the efficacy of two notable interpretability algorithms for medical image analysis: Layer-wise Relevance Propagation (LRP) and Gradient-weighted Class Activation Mapping (Grad-CAM). Using the BreaKHis dataset, we investigate the ability of these algorithms to identify the regions that most influence a network's decisions. Preliminary results indicate that LRP produces fine-grained, pixel-level relevance maps, while Grad-CAM highlights broader influential regions, a distinction consistent with their respective design philosophies. By employing both algorithms, our study demonstrates the complementary insights they offer: a more transparent understanding for medical professionals of the diagnostic decisions made by deep learning models, and, for IT researchers, pointers toward model improvements through the identification of sources of false results. Our research aspires to advance the development of transparent, reliable, and effective clinical decision-support tools in breast cancer diagnostics.
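
To make the Grad-CAM side of the comparison concrete, a minimal sketch follows. The ResNet-18 backbone, target layer, preprocessing, and helper names are illustrative assumptions for a binary benign/malignant classifier, not the study's actual pipeline.

```python
# Minimal Grad-CAM sketch in PyTorch. Assumption: a ResNet-18 backbone
# fine-tuned for binary (benign/malignant) classification; the model,
# target layer, and input preprocessing are illustrative, not the
# study's actual setup.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # benign vs. malignant
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block: Grad-CAM needs its spatial
# feature maps and the gradients flowing back into them.
target_layer = model.layer4
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return a normalized heatmap (H, W) for one image tensor (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Global-average-pool the gradients into per-channel weights, then
    # take a ReLU of the weighted sum of feature maps so that only
    # positively contributing regions remain (as in the Grad-CAM paper).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]
```

Running grad_cam on a preprocessed histopathology patch yields a heatmap that can be overlaid on the original image; because the map is computed from the final convolutional block and upsampled, it naturally marks broad regions. LRP, by contrast, redistributes the output score backward layer by layer (e.g. the epsilon rule R_j = sum_k (a_j w_jk / (eps + sum_j a_j w_jk)) R_k), which is why it yields the finer pixel-level relevance maps noted above.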
Period: Feb 2023 – Dec 2023
Degree of Recognition: International

Keywords

  • Histopathology Image Analysis
  • Breast Cancer Classification
  • Layer-wise Relevance Propagation (LRP)
  • Explainable AI (XAI)