If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The FDA has been championing digital health of late, with wide-ranging guidance that derives from the 21st Century Cures Act. This legislation acknowledges the potential of digital health to make a difference in patient care, potentially leading to more precise therapies. Several developments this week show that the regulator is right to be excited. Some of the most exciting advances have come in the field of cancer – medical devices firm Angle has produced a new analysis showing that its liquid biopsy device, Parsortix, could be used instead of conventional tissue biopsies. Parsortix works by monitoring a patient's bloodstream for circulating cancer cells, and the research from the University of Southern California adds to the body of evidence that liquid biopsies could one day replace invasive and unpleasant tissue biopsies.
During a 2016 simulation exercise, researchers evaluated the ability of 32 different deep learning algorithms to detect lymph node metastases in patients with breast cancer. Each algorithm's performance was then compared to that of a panel of 11 pathologists working under a time constraint (WTC). Overall, the team found that seven of the algorithms outperformed the panel of pathologists, publishing an in-depth analysis in JAMA. "To our knowledge, this is the first study that shows that interpretation of pathology images can be performed by deep learning algorithms at an accuracy level that rivals human performance," wrote lead author Babak Ehteshami Bejnordi, MS, of Radboud University Medical Center in Nijmegen, the Netherlands, and colleagues. The simulation took place during the Cancer Metastases in Lymph Nodes Challenge 2016 (CAMELYON16) in the Netherlands.
In a spotlight paper at the 2017 NIPS Conference, my team and I presented Net-Trim, an AI optimization framework that prunes a pre-trained deep neural network one layer at a time via a convex scheme. Deep learning has become a method of choice for many AI applications, ranging from image recognition to language translation. Thanks to algorithmic and computational advances, we are now able to train bigger and deeper neural networks, resulting in increased AI accuracy. However, because of increased power consumption and memory usage, it is impractical to deploy such models on embedded devices with limited hardware resources and power constraints. One practical way to overcome this challenge is to reduce the model complexity without sacrificing accuracy.
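To make the "prune one layer at a time" idea concrete, here is a minimal sketch in plain Python. Note the hedge: Net-Trim itself solves an L1-regularized convex program per layer so that the pruned layer still reproduces the original layer's responses; the magnitude-threshold rule below is a much cruder stand-in used only to illustrate layer-wise sparsification, and the example weights are made up.

```python
# Simplified layer-wise magnitude pruning -- an illustration only, not the
# actual Net-Trim convex program. We keep the largest-magnitude weights in
# each layer and zero out the rest.

def prune_layer(weights, keep_fraction=0.5):
    """Zero out the smallest-magnitude weights in one layer's weight matrix."""
    flat = sorted((abs(w) for row in weights for w in row), reverse=True)
    k = max(1, int(len(flat) * keep_fraction))
    threshold = flat[k - 1]                      # smallest magnitude we keep
    return [[w if abs(w) >= threshold else 0.0 for w in row]
            for row in weights]

# Toy 2x3 weight matrix: half the entries survive, small ones become zeros.
layer = [[0.9, -0.05, 0.3], [0.01, -0.7, 0.2]]
pruned = prune_layer(layer, keep_fraction=0.5)
```

Applied layer by layer to a trained network (followed, in practice, by a fine-tuning pass), this kind of sparsification is what makes deployment on memory-constrained embedded hardware more tractable.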
A deep learning algorithm can detect metastases in sections of lymph nodes from women with breast cancer, and a deep learning system (DLS) has high sensitivity and specificity for identifying diabetic retinopathy, according to two studies published online December 12 in the Journal of the American Medical Association. Babak Ehteshami Bejnordi, from the Radboud University Medical Center in Nijmegen, Netherlands, and colleagues compared the performance of automated deep learning algorithms for detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer against pathologists' diagnoses in a diagnostic setting. The researchers found that the area under the receiver operating characteristic curve (AUC) ranged from 0.556 to 0.994 for the algorithms. The lesion-level true-positive fraction achieved by the top-performing algorithm was comparable to that of the pathologist without a time constraint, at a mean of 0.0125 false-positives per normal whole-slide image. Daniel Shu Wei Ting, M.D., Ph.D., from the Singapore National Eye Center, and colleagues assessed the performance of a DLS for detecting referable diabetic retinopathy and related eye diseases using 494,661 retinal images. The researchers found that the AUC of the DLS for referable diabetic retinopathy was 0.936, and sensitivity and specificity were 90.5 and 91.6 percent, respectively.
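For readers less familiar with the metrics these studies report, the sketch below shows how AUC, sensitivity, and specificity are computed from a model's scores. The labels and scores are made-up toy values, not data from either study; the AUC here is computed via its rank interpretation (the probability that a randomly chosen positive case outscores a randomly chosen negative one).

```python
# Toy illustration of the metrics reported in the JAMA studies.
# "labels" marks cases that truly contain disease (1) or not (0);
# "scores" are a model's predicted probabilities (made-up values).

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Fraction of positives caught, and fraction of negatives cleared."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
```

Sensitivity and specificity depend on the chosen threshold, while AUC summarizes performance across all thresholds, which is why the studies report both.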
Free-text radiology reports can be automatically classified by convolutional neural networks (CNNs) powered by deep-learning algorithms with accuracy that's equal to or better than that achieved by traditional, and decidedly labor-intensive, natural language processing (NLP) methods. That's the conclusion of researchers led by Matthew Lungren, MD, MPH, of Stanford University. The team tested a CNN model they developed for mining pulmonary-embolism findings from thoracic CT reports generated at two institutions. Radiology published their study, lead-authored by Matthew Chen, MS, also of Stanford, online Nov. 13. The researchers analyzed annotations made by two radiologists for the presence, chronicity and location of pulmonary embolisms, then compared their CNN's performance with that of an NLP model considered quite proficient in this task, called PeFinder.
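The core mechanism a CNN applies to report text can be stripped down to a few lines. The sketch below is an illustration of the general idea, not the Stanford model: each token gets a (here, tiny and hand-picked) embedding vector, one convolution filter slides across adjacent tokens, and max-pooling keeps the strongest response anywhere in the report. All names and numbers are hypothetical.

```python
# One convolution filter sliding over a token sequence -- the essence of a
# CNN text classifier, in plain Python. Embeddings and filter weights are
# toy, hand-picked values for illustration only.

EMBED = {  # hypothetical 2-d token "embeddings"
    "no": [1.0, 0.0], "acute": [0.0, 1.0],
    "pulmonary": [0.8, 0.2], "embolism": [0.6, 0.9],
}

# A width-2 (bigram) filter whose weights happen to align with the
# embeddings of "pulmonary embolism", so that phrase excites it most.
FILTER = [[0.8, 0.2], [0.6, 0.9]]

def conv_max(tokens):
    """Slide the bigram filter over the tokens; max-pool over positions."""
    best = float("-inf")
    for i in range(len(tokens) - 1):
        window = [EMBED[tokens[i]], EMBED[tokens[i + 1]]]
        score = sum(f * x
                    for frow, xrow in zip(FILTER, window)
                    for f, x in zip(frow, xrow))
        best = max(best, score)
    return best

# A report containing the target phrase activates the filter more strongly:
hit = conv_max(["no", "pulmonary", "embolism"])
miss = conv_max(["no", "acute"])
```

In a real classifier, hundreds of such filters of varying widths are learned from data rather than hand-set, and their pooled responses feed a final classification layer; no hand-built rules of the kind traditional NLP pipelines require are involved.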
GE Healthcare is set to speed up the time taken to process medical images, thanks to a pair of partnerships announced on Sunday. The global giant will team up with Nvidia to update its 500,000 medical imaging devices worldwide with Revolution Frontier CT, claimed to be twice as fast as the previous-generation image processor. GE said the speedier Revolution Frontier would be better at liver lesion detection and kidney lesion characterisation, and has the potential to reduce the number of follow-up appointments and the number of non-interpretable scans. GE Healthcare is also making use of Nvidia in its new analytics platform, with sections of it to be placed in the Nvidia GPU Cloud. An average hospital generates 50 petabytes of data annually, GE said, but only 3 percent of that data is analysed, tagged, or made actionable.
Guest blog post by Kenneth Soo, originally posted here. Consider a cartoon drawing of a giraffe: one should be able to tell that it is a giraffe, despite it being strangely fat. We recognize images and objects instantly, even if these images are presented in a form that is different from what we have seen before. We do this with the 80 billion neurons in our brain working together to transmit information. This remarkable system of neurons is also the inspiration behind a widely-used machine learning technique called Artificial Neural Networks (ANN).
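The analogy can be made concrete with a few lines of code. Below is a minimal sketch of a single artificial neuron and a layer built from several of them; the specific weights are arbitrary example values, and real networks learn theirs from data.

```python
# A single artificial neuron: weight the inputs, sum them, and "fire"
# through an activation function -- loosely like a biological neuron
# integrating signals from its neighbours.

import math

def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid activation

# Stacking such units side by side gives one layer of a feed-forward ANN;
# chaining layers gives the full network.
def layer(inputs, weight_matrix, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Example: weighted sum is 2.0*1.0 + (-1.0)*0.0 - 2.0 = 0, and sigmoid(0) = 0.5
out = neuron([1.0, 0.0], [2.0, -1.0], -2.0)
```

Each neuron on its own is trivial; the recognition ability the post describes emerges from very many of them connected in layers, with the connection weights adjusted during training.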
An algorithm developed by researchers at Stanford University proved more effective than human radiologists at diagnosing cases of pneumonia. Much research has been shared on the potential of artificial intelligence applied to medicine, and in some cases it can reach a level of accuracy that exceeds the performance of professionals. Following this line, the Stanford researchers published a paper on CheXNet, a convolutional neural network they developed with the ability to detect signs of pneumonia. To do this, it uses the traditional diagnostic input, chest radiographs, and works with 112,120 chest X-ray images labeled with 14 types of disease.
Two popular video games act like IQ tests, with the most intelligent players gaining the highest scores, research has shown. Both games, League of Legends and Defence of the Ancients 2 (DOTA 2), involve chess-like strategic thinking. Scientists discovered that high levels of skill in both games correlated with having a high IQ. A similar association has been seen between IQ and chess performance.