Explainable, trustworthy, and ethical machine learning for healthcare: A survey

Khansa Rasheed, Adnan Qayyum, Mohammed Ghaly, Ala Al-Fuqaha, Adeel Razi, Junaid Qadir

Research output: Contribution to journal › Review Article › peer-review

53 Citations (Scopus)


With the advent of machine learning (ML) and deep learning (DL) applications in critical domains such as healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of many DL models is a roadblock to clinical adoption; to gain the trust of clinicians and patients, we need to provide explanations for the decisions of these models. With the promise of enhancing the trust and transparency of black-box models, researchers are working to mature the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare, and we describe how explainable and trustworthy ML can help resolve these ethical problems. Finally, we elaborate on the limitations of existing approaches and highlight open research problems that require further development.

Original language: English
Article number: 106043
Number of pages: 23
Journal: Computers in Biology and Medicine
Publication status: Published - Oct 2022


Keywords

  • Explainable machine learning
  • Healthcare
  • Interpretable machine learning
  • Trustworthiness
