

@Radiology_AI

#artificialintelligence

To develop a proof-of-concept convolutional neural network (CNN) to synthesize T2 maps in right lateral femoral condyle articular cartilage from anatomic MR images by using a conditional generative adversarial network (cGAN). In this retrospective study, anatomic images (from turbo spin-echo and double-echo in steady-state scans) of the right knee of 4621 patients included in the 2004–2006 Osteoarthritis Initiative were used as input to a cGAN-based CNN, and a predicted CNN T2 was generated as output. These patients included men and women of all ethnicities, aged 45–79 years, with or at high risk for knee osteoarthritis incidence or progression who were recruited at four separate centers in the United States. These data were split into 3703 (80%) for training, 462 (10%) for validation, and 456 (10%) for testing. Linear regression analysis was performed between the multiecho spin-echo (MESE) and CNN T2 in the test dataset.
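The workflow described above comes down to a patient-level 80%/10%/10% split followed by a linear fit between the reference MESE T2 values and the cGAN-predicted T2 values on the test set. Below is a minimal sketch of those two steps; the array names and random placeholder values are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch of the patient-level split and the evaluation step described
# above. The array names (mese_t2, cnn_t2) and the placeholder values are
# illustrative assumptions, not the study's actual data or pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one mean T2 value (ms) per patient for the
# reference multiecho spin-echo (MESE) map and the cGAN-predicted map.
n_patients = 4621
mese_t2 = rng.normal(40.0, 5.0, n_patients)            # placeholder reference values
cnn_t2 = mese_t2 + rng.normal(0.0, 3.0, n_patients)    # placeholder predictions

# 80% / 10% / 10% split at the patient level, matching the 3703/462/456 counts.
idx = rng.permutation(n_patients)
train_idx, val_idx, test_idx = idx[:3703], idx[3703:4165], idx[4165:]

# Linear regression between MESE T2 and CNN T2 on the held-out test set.
result = stats.linregress(mese_t2[test_idx], cnn_t2[test_idx])
print(f"slope={result.slope:.2f}, intercept={result.intercept:.2f}, r^2={result.rvalue**2:.3f}")
```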


5 barriers to recognizing the full potential of deep learning

#artificialintelligence

Deep learning is an important element of AI that's helping advance diagnostics and treatment, but it also remains relatively uncharted territory. The most successful applications of the technology to date have been in medical imaging, first author Fei Wang, PhD, and colleagues at Weill Cornell Medicine in New York wrote in JAMA Internal Medicine. The technology's other potential applications are vast, but scientists still face significant barriers. "The potential of deep learning to disentangle complex, subtle discriminative patterns in images suggests that these techniques may be useful in other areas of medicine," Wang and co-authors said. "Substantial challenges must be addressed, however, before deep learning can be applied more broadly."


Transfusion: Understanding Transfer Learning for Medical Imaging

Neural Information Processing Systems

Transfer learning from natural image datasets, particularly ImageNet, using standard large models and corresponding pretrained weights has become a de facto method for deep learning applications to medical imaging. However, there are fundamental differences in data sizes, features, and task specifications between natural image classification and the target medical tasks, and there is little understanding of the effects of transfer. In this paper, we explore properties of transfer learning for medical imaging. A performance evaluation on two large-scale medical imaging tasks shows that, surprisingly, transfer offers little benefit to performance, and simple, lightweight models can perform comparably to ImageNet architectures. Investigating the learned representations and features, we find that some of the differences from transfer learning are due to the over-parametrization of standard models rather than sophisticated feature reuse.
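The comparison the abstract describes amounts to fine-tuning a standard ImageNet architecture initialized with pretrained weights versus training a small model from scratch on the same medical task. The sketch below sets up both models in PyTorch; the lightweight architecture and the class count are assumptions for illustration, not the specific models evaluated in the paper.

```python
# Sketch of the two setups contrasted in the paper: a standard ImageNet
# architecture with pretrained weights versus a small CNN trained from scratch.
# Requires a recent torchvision (>= 0.13) for the weights enum; the lightweight
# model and num_classes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # e.g., severity grades in a hypothetical medical classification task

# Setup 1: standard large model with ImageNet-pretrained weights ("transfer").
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

# Setup 2: simple, lightweight CNN with random initialization.
small_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, num_classes),
)

# Both models would then be trained on the medical task and their test
# performance compared, which is the evaluation the abstract describes.
x = torch.randn(2, 3, 224, 224)  # dummy batch of images
print(resnet(x).shape, small_cnn(x).shape)
```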


Deepfake AI: Researchers Have Finally Found A 'Good' Way To Use It

#artificialintelligence

Deepfakes have gained a lot of negative attention recently, whether from the widely criticized DeepNude app, which removes clothing from pictures of women, or from FakeApp, which swaps celebrities' faces onto porn stars in videos. Yet deep-learning algorithms are excellent at recognizing patterns in images. That capability can be used to train neural networks to detect different types of cancer in CT scans, identify diseases in MRIs, and spot abnormalities in X-rays. While the idea of applying deepfake AI to medicine sounds promising, researchers often don't have enough data to train a model -- simply because of privacy concerns.
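As a rough illustration of the pattern-detection use the article describes, the sketch below runs one training step of a small CNN classifier on a placeholder batch of grayscale slices; in the scenario the article raises, such a batch could consist of GAN-synthesized images rather than real patient scans. All names, shapes, and hyperparameters here are assumptions, not code from any of the cited work.

```python
# Minimal sketch of a normal-vs-abnormal image classifier of the kind the
# article describes. Random tensors stand in for (possibly GAN-synthesized)
# grayscale CT slices; everything here is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two classes: normal vs. abnormal
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for (possibly synthetic) grayscale slices.
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))

# One training step on the dummy batch.
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"training loss on one dummy batch: {loss.item():.3f}")
```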