A Survey on Deep Learning and Explainability for Automatic Image-based Medical Report Generation
Messina, Pablo, Pino, Pablo, Parra, Denis, Soto, Alvaro, Besa, Cecilia, Uribe, Sergio, Andía, Marcelo, Tejos, Cristian, Prieto, Claudia, Capurro, Daniel
–arXiv.org Artificial Intelligence
Research over the last five years shows a clear improvement in computer-aided detection (CAD), specifically in disease prediction from medical images [36, 61, 105, 130, 134] as well as from Electronic Health Records (EHR) [113], by using deep neural networks (DNN) and treating the problem as a supervised classification or segmentation task. Recently, Topol [129] indicated that the need for diagnosis and reporting from image-based examinations far exceeds the current capacity of physicians in the US. This situation motivates the development of automatic image-based diagnosis as well as automatic reporting. Furthermore, the lack of specialist physicians is even more critical in resource-limited countries [111], where the expected impact of this technology would therefore be even greater. However, elaborating high-quality medical reports from medical images, such as chest X-rays, computed tomography (CT) or magnetic resonance imaging (MRI) scans, is a task that requires a trained radiologist with years of experience.
Oct-20-2020