A Survey on Deep Learning and Explainability for Automatic Image-based Medical Report Generation

Messina, Pablo, Pino, Pablo, Parra, Denis, Soto, Alvaro, Besa, Cecilia, Uribe, Sergio, Andía, Marcelo, Tejos, Cristian, Prieto, Claudia, Capurro, Daniel

arXiv.org Artificial Intelligence 

Research over the last five years shows clear improvement in computer-aided detection (CAD), specifically in disease prediction from medical images [36, 61, 105, 130, 134] as well as from Electronic Health Records (EHR) [113], by using deep neural networks (DNN) and framing the problem as a supervised classification or segmentation task. Recently, Topol [129] indicated that the need for diagnosis and reporting from image-based examinations far exceeds the current capacity of physicians in the US. This situation motivates the development of automatic image-based diagnosis as well as automatic reporting. Furthermore, the shortage of specialist physicians is even more critical in resource-limited countries [111], where the expected impact of this technology would therefore be even greater. However, producing high-quality medical reports from medical images, such as chest X-rays, computed tomography (CT) scans, or magnetic resonance imaging (MRI), is a task that requires a trained radiologist with years of experience.
