AI Hype and Radiology: A Plea for Realism and Accuracy

#artificialintelligence

This opinion piece is inspired by the old Danish proverb: "Making predictions is hard, especially about the future" (1). As every reader knows, the momentum of artificial intelligence (AI) and the eventual implementation of deep learning models seem assured. Some pundits have gone considerably further, however, and predicted a sweeping AI takeover of radiology. While such doomsday predictions are understandably attention-grabbing, they are highly unlikely, at least in the short term. And although many radiologists support AI and believe it will enable greater efficiency, a recent study of medical students found very different reactions (2).


Adversarial Attacks Against Medical Deep Learning Systems

arXiv.org Machine Learning

The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud, extend adversarial attacks to three popular medical imaging tasks, and provide concrete examples of how and why such attacks could realistically be carried out. For each of our representative medical deep learning classifiers, both white-box and black-box attacks were effective and imperceptible to humans. We urge caution in deploying deep learning systems in clinical settings, and encourage research into domain-specific defense strategies.
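
To make the white-box setting concrete, the sketch below applies a fast-gradient-sign-style perturbation to an input image using PyTorch. The model, label, and epsilon value are illustrative assumptions, not the paper's actual classifiers, datasets, or attack configuration.

```python
# Minimal sketch of a white-box FGSM-style attack on an image classifier.
# The model, inputs, and epsilon are illustrative assumptions only.
import torch
import torch.nn.functional as F


def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge `image` in the direction that increases the classifier's loss,
    keeping the perturbation small enough to be visually imperceptible."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step along the sign of the loss gradient, then clamp back to the
    # valid pixel range so the result remains a plausible image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A black-box attack follows the same idea but, lacking gradient access, estimates the perturbation direction by querying the model or by transferring adversarial examples crafted on a surrogate classifier.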