A group of researchers from Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS) has developed a way to train artificial intelligence to read and interpret pathology images. The scientists tested the artificial intelligence (AI) during a competition at the annual International Symposium on Biomedical Imaging, where it was tasked with finding breast cancer in images of lymph nodes. The system detected breast cancer accurately 92 percent of the time and won in two separate categories of the contest. Andrew Beck of BIDMC says they used deep learning, a method commonly used to train AI to recognize speech, images, and objects. They fed the machine hundreds of slides marked to indicate which regions contained cancerous cells and which contained normal ones.
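Region-annotated slides like those described above are typically turned into training data by cutting each slide into small patches and giving each patch the label of its annotated region. A minimal sketch of that patch-extraction step, in plain Python on a toy grid (the tile size and function name are illustrative, not details of the BIDMC system):

```python
def extract_patches(slide, mask, tile=2):
    """Split a slide (2D grid of pixel values) into tile x tile patches.

    Each patch is labeled 1 (tumor) if any pixel in the matching region
    of the annotation mask is marked cancerous, else 0 (normal).
    """
    patches = []
    h, w = len(slide), len(slide[0])
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = [row[c:c + tile] for row in slide[r:r + tile]]
            label = int(any(mask[r + i][c + j]
                            for i in range(tile) for j in range(tile)))
            patches.append((patch, label))
    return patches

# Toy 4x4 "slide" with a tumor annotation in the top-left corner.
slide = [[10, 12, 3, 4],
         [11, 13, 2, 5],
         [1,  2,  6, 7],
         [3,  1,  8, 9]]
mask = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
pairs = extract_patches(slide, mask)
labels = [lab for _, lab in pairs]
print(labels)  # → [1, 0, 0, 0]
```

The labeled patches would then feed a patch-level classifier; whole-slide predictions come from aggregating patch scores.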
Pathologists agreed just three-quarters of the time when diagnosing breast cancer from biopsy specimens, according to a recent study. The difficult, time-consuming process of analyzing tissue slides is why pathology is one of the most expensive departments in any hospital. Faisal Mahmood, assistant professor of pathology at Harvard Medical School and Brigham and Women's Hospital, leads a team developing deep learning tools that combine a variety of sources -- digital whole-slide histopathology data, molecular information, and genomics -- to aid pathologists and improve the accuracy of cancer diagnosis. Mahmood, who heads the eponymous Mahmood Lab in the Division of Computational Pathology at Brigham and Women's Hospital, spoke this week about this research at GTC DC, the Washington edition of our GPU Technology Conference. The variability in pathologists' diagnoses "can have dire consequences, because an uncertain determination can lead to more biopsies and unnecessary interventional procedures," he said in a recent interview.
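One simple way to combine modalities of the kind Mahmood describes is late fusion: each modality produces its own malignancy score, and the scores are merged into a single prediction. A minimal sketch under that assumption (the modality names, scores, and weights below are made up for illustration; the lab's actual models are far more sophisticated):

```python
def late_fusion(scores, weights):
    """Weighted average of per-modality malignancy probabilities.

    scores  -- dict mapping modality name to a probability in [0, 1]
    weights -- dict mapping modality name to its relative weight
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality outputs for one case.
scores = {"histology": 0.80, "molecular": 0.60, "genomics": 0.70}
weights = {"histology": 0.5, "molecular": 0.25, "genomics": 0.25}
fused = late_fusion(scores, weights)
print(round(fused, 3))  # → 0.725
```

In practice the fusion weights would themselves be learned, and deep models can instead fuse modalities earlier, at the feature level.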
A new algorithm for on-line learning of linear-threshold functions is proposed, which efficiently combines second-order statistics about the data with the "logarithmic behavior" of multiplicative/dual-norm algorithms. An initial theoretical analysis suggests that our algorithm may be viewed as a standard Perceptron algorithm operating on a transformed sequence of examples with improved margin properties. We also report on experiments carried out on datasets from diverse domains, with the goal of comparing our algorithm to known Perceptron variants (first-order, second-order, additive, multiplicative). Our learning procedure seems to generalize quite well and converges faster than the corresponding multiplicative baseline algorithms.
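For reference, the first-order Perceptron used as a baseline in this line of work maintains a weight vector and applies the additive update w ← w + y·x on each mistake. A minimal sketch of that baseline (the proposed second-order algorithm additionally incorporates the data's correlation structure, which is not shown here):

```python
def perceptron(examples, epochs=10):
    """Classic on-line Perceptron for linear-threshold functions.

    examples -- list of (x, y) pairs, x a feature list, y in {-1, +1}.
    Applies the additive update w <- w + y * x on each mistake and
    returns the learned weight vector.
    """
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            margin = sum(wi * xi for wi, xi in zip(w, x))
            if y * margin <= 0:           # mistake (or zero margin)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:                 # separated the data; stop
            break
    return w

# Linearly separable toy data: the label follows the first feature's sign.
data = [([2.0, 1.0], 1), ([1.5, -1.0], 1),
        ([-2.0, 0.5], -1), ([-1.0, -1.5], -1)]
w = perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
         for x, _ in data]
print(preds)  # → [1, 1, -1, -1]
```

On separable data the number of such mistake-driven updates is bounded by the classic Perceptron convergence theorem; the second-order variant aims to tighten that bound when features are correlated.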
Reading and understanding text is an important component of computer-aided diagnosis in clinical medicine, and it is also a major research problem in the field of NLP. In this work, we introduce a question-answering task called MedQA to study answering questions in clinical medicine using knowledge from a large-scale document collection. The aim of MedQA is to answer real-world questions through large-scale reading comprehension. We propose SeaReader, a modular end-to-end reading comprehension model based on LSTM networks and a dual-path attention architecture. The dual-path attention models information flow from two perspectives and can simultaneously read individual documents and integrate information across multiple documents. In our experiments, SeaReader achieves a large increase in accuracy on MedQA over competing models. Additionally, we develop a series of novel techniques to interpret the question-answering process in SeaReader.
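The dual-path architecture itself is specific to SeaReader, but its building block is standard attention: score each document against the question, normalize the scores with a softmax, and read out a weighted mixture of document representations. A minimal sketch of that building block in plain Python (the toy vectors stand in for learned LSTM encodings; this is not the SeaReader model itself):

```python
import math

def attend(query, docs):
    """Dot-product attention: weight each document vector by its
    softmax-normalized similarity to the query vector, then return
    the attention weights and the weighted-sum "context" vector."""
    scores = [sum(q * d for q, d in zip(query, doc)) for doc in docs]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * doc[i] for w, doc in zip(weights, docs))
               for i in range(len(query))]
    return weights, context

# Toy encodings: doc 0 points the same way as the question, doc 1 does not.
question = [1.0, 0.0]
documents = [[0.9, 0.1], [0.0, 1.0]]
weights, context = attend(question, documents)
print(weights.index(max(weights)))  # → 0
```

A "dual-path" design applies attention in both directions (question-to-document and document-to-question) so that per-document reading and cross-document integration can proceed in parallel.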