A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images

Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, Sergio A. Uribe, Marcelo Andia, Cristián Tejos, Claudia Prieto, Daniel Capurro

Research output: Contribution to journal › Article › peer-review

Abstract

Every year, physicians face an increasing demand for image-based diagnosis from patients, a problem that recent artificial intelligence methods can help address. In this context, we survey work on automatic report generation from medical images, focusing on methods that use deep neural networks, organized along four axes: (1) Datasets, (2) Architecture Design, (3) Explainability, and (4) Evaluation Metrics. Our survey identifies promising developments as well as remaining challenges. Among the latter, the current evaluation of generated reports is especially weak, since it mostly relies on traditional Natural Language Processing (NLP) metrics, which do not accurately capture medical correctness.
Original language: English
Article number: 203
Number of pages: 40
Journal: ACM Computing Surveys
Volume: 54
Issue number: 10S
DOIs
Publication status: Published - Sep 2022
Externally published: Yes
