
Classifying Breast Histopathology Images with a Ductal Instance-Oriented Pipeline Artificial Intelligence

In this study, we propose the Ductal Instance-Oriented Pipeline (DIOP), which contains a duct-level instance segmentation model, a tissue-level semantic segmentation model, and three levels of features for diagnostic classification. Building on recent advances in instance segmentation and the Mask R-CNN model, our duct-level segmenter identifies each individual duct inside a microscopic image and then extracts tissue-level information from the identified ductal instances. Leveraging three levels of information obtained from these ductal instances and from the histopathology image itself, the proposed DIOP outperforms previous approaches (both feature-based and CNN-based) in all diagnostic tasks; for the four-way classification task, DIOP achieves performance comparable to general pathologists on this unique dataset. DIOP takes only a few seconds at inference time, so it could be used interactively on most modern computers. Further clinical studies are needed to establish the robustness and generalizability of this system.
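
The abstract does not specify how the three feature levels are combined, but a minimal sketch of such a fusion step (with hypothetical feature dimensions, and mean pooling over detected ducts as an assumed aggregation) might look like this:

```python
import numpy as np

def fuse_features(duct_feats, tissue_hist, image_feats):
    """Aggregate per-duct instance features by mean pooling, then
    concatenate them with slide-level tissue and image descriptors.
    (Illustrative only; the paper's exact fusion is not specified.)"""
    duct_vec = duct_feats.mean(axis=0)  # pool over detected ducts -> (D,)
    return np.concatenate([duct_vec, tissue_hist, image_feats])

# Toy example: 5 detected ducts with 8-dim features each,
# a 4-bin tissue-class area histogram, and a 16-dim image descriptor.
duct_feats = np.random.rand(5, 8)
tissue_hist = np.random.rand(4)
image_feats = np.random.rand(16)
fused = fuse_features(duct_feats, tissue_hist, image_feats)
print(fused.shape)  # (28,)
```

The fused vector would then feed a downstream classifier for the diagnostic task.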

Deep learning helps determine a woman's risk of breast cancer


A new deep learning model has been created by a team of researchers at Massachusetts General Hospital (MGH). Early evidence shows that the system is more accurate at predicting a woman's risk of developing breast cancer than current risk assessment tools. Using deep learning to recognize imaging biomarkers in mammograms presents an opportunity to detect the early warning signs of breast cancer in women who may not be considered at risk, allowing intervention strategies to begin early and potentially improving prognosis. Deep learning, a subset of machine learning, has already made big waves in medicine and healthcare. It uses artificial neural networks to solve highly complex problems that are beyond the capabilities of traditional computing approaches.

Researchers find that deep learning model can predict breast cancer risk


Researchers have developed a deep learning model that identifies imaging biomarkers on screening mammograms to predict a patient's risk for developing breast cancer with greater accuracy than traditional risk assessment tools. "Traditional risk assessment models do not leverage the level of detail that is contained within a mammogram," said study author Leslie Lamb from the Massachusetts General Hospital (MGH) in the US. "Even the best existing traditional risk models may separate sub-groups of patients but are not as precise on the individual level," Lamb added. Currently available risk assessment models incorporate only a small fraction of patient data, such as family history, prior breast biopsies, and hormonal and reproductive history. Only one feature from the screening mammogram itself, breast density, is incorporated into traditional models.

Cancer image classification based on DenseNet model Machine Learning

Computer-aided diagnosis establishes methods for the robust assessment of medical image-based examinations. Image processing offers a promising strategy to facilitate disease classification and detection while reducing unnecessary expenses. In this paper, we propose a novel metastatic cancer image classification model based on the DenseNet block, which can effectively identify metastatic cancer in small image patches taken from larger digital pathology scans. We evaluate the proposed approach on a slightly modified version of the PatchCamelyon (PCam) benchmark dataset provided by a Kaggle competition, which packs the clinically relevant task of metastasis detection into a straightforward binary image classification task. The experiments indicate that our model outperforms classical architectures such as ResNet34 and VGG19. Moreover, we also conduct data augmentation experiments and study the relationship between the number of batches processed and the loss value during training and validation.
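
The defining idea of a DenseNet block is that each layer receives the concatenation of all preceding feature maps. A minimal numpy sketch of that connectivity pattern (real blocks use BN-ReLU-convolution layers; the per-layer transform below is a placeholder) is:

```python
import numpy as np

def dense_block(x, num_layers, growth):
    """Dense connectivity: each layer emits `growth` new channels and
    sees the concatenation of the input plus all earlier layers'
    outputs. The transform here is a stand-in for BN+ReLU+conv."""
    feats = [x]
    for _ in range(num_layers):
        inp = np.concatenate(feats, axis=0)   # concat along channel axis
        new = np.tanh(inp[:growth] * 0.5)     # placeholder layer transform
        feats.append(new)
    return np.concatenate(feats, axis=0)

x = np.ones((16, 8, 8))                       # 16 channels, 8x8 feature map
out = dense_block(x, num_layers=4, growth=12)
print(out.shape)  # (64, 8, 8): 16 input + 4 * 12 grown channels
```

This channel-concatenation pattern is what gives DenseNet its feature reuse and compact parameter count relative to plain residual stacks.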

MIT researcher held up as model of how algorithms can benefit humanity


In June, when MIT artificial intelligence researcher Regina Barzilay went to Massachusetts General Hospital for a mammogram, her data were run through a deep learning model designed to assess her risk of developing breast cancer, which she had been diagnosed with once before. The workings of the algorithm, which predicted that her risk was low, were familiar: Barzilay helped build that very model, after being spurred by her 2014 cancer diagnosis to pivot her research to health care. Barzilay's work in AI, which ranges from tools for early cancer detection to platforms to identify new antibiotics, is increasingly garnering recognition: On Wednesday, the Association for the Advancement of Artificial Intelligence named Barzilay as the inaugural recipient of a new annual award honoring an individual developing or promoting AI for the good of society. The award comes with a $1 million prize sponsored by the Chinese education technology company Squirrel AI Learning. While there are already prizes in the AI field, notably the Turing Award for computer scientists, those existing awards are typically "more focused on scientific, technical contributions and ideas," said Yolanda Gil, a past president of AAAI and an AI researcher at the University of Southern California.

Deep learning achieves radiologist-level performance of tumor segmentation in breast MRI Machine Learning

Purpose: The goal of this research was to develop a deep network architecture that achieves fully automated, radiologist-level segmentation of breast tumors in MRI. Materials and Methods: We leveraged 38,229 clinical MRI breast exams collected retrospectively from women aged 12-94 (mean age 54) who presented between 2002 and 2014 at a single clinical site. The training set for the network consisted of 2,555 malignant breasts that were segmented in 2D by experienced radiologists, as well as 60,108 benign breasts that served as negative controls. The test set consisted of 250 exams with tumors segmented independently by four radiologists. We compared several 3D deep convolutional neural network architectures, input modalities, and harmonization methods. The outcome measure was the Dice score for 2D segmentation, which was compared between the network and the radiologists using the Wilcoxon signed-rank test and the TOST procedure. Results: The best-performing network on the training set was a volumetric U-Net with contrast enhancement dynamics as input and with intensity normalized for each exam. On the test set, the median Dice score of this network was 0.77. The performance of the network was equivalent to that of the radiologists (TOST procedure with radiologist performance of 0.69-0.84 as equivalence bounds: p = 5e-10 and p = 2e-5, respectively; N = 250) and compares favorably with the published state of the art (0.6-0.77). Conclusion: When trained on a dataset of over 60 thousand breasts, a volumetric U-Net performs as well as expert radiologists at segmenting malignant breast lesions in MRI.
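
The Dice score used as the outcome measure here is a standard overlap metric between a predicted and a reference binary mask; a small self-contained implementation is:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy masks: 4 predicted pixels, 6 reference pixels, 4 overlapping.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_score(a, b), 2))  # 0.8
```

A Dice of 1.0 means perfect overlap, 0.0 means none; the paper's median of 0.77 sits within the reported 0.69-0.84 inter-radiologist range.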

Interpretation of smartphone-captured radiographs utilizing a deep learning-based approach Artificial Intelligence

In the field of medical imaging, chest radiographs, or X-rays, remain the gold standard for assessing a patient's lung condition and play an important role in clinical care. Recent years have witnessed the remarkable success of artificial intelligence (AI) technology in fields such as computer vision and health services. In detecting diseases in medical images, especially radiographs, AI-based systems have proven to be powerful tools that can handle medical challenges quickly and cheaply, and can thereby significantly improve diagnostic quality and ultimately the treatment of disease. For example, skin cancer detection was enabled by a number of accurate deep learning studies in 2019, such as [1] [2] or [3]. Mammography, which is typically used to detect breast cancer, has also been the subject of such deep learning studies [4] [5]. A recent study applied deep learning to identify appendicitis from videos containing CT scans [6]. For radiographs, scientists have also applied deep learning to detect particular lung conditions, such as pneumonia or consolidation. Deep learning has also proven effective in a recent study at generating new synthetic data for training [7]. Some works even conclude that AI-based systems can surpass the performance of general medical doctors or qualified experts in disease detection [8] [9].

Artificial intelligence applications in health care on the rise


Columbia University professor and robotics engineer Hod Lipson knows the importance of artificial intelligence (AI) on a global level. "It permeates everything we do, from the stock market, from predicting the weather to what product you're going to buy," he said Wednesday during the second day of the virtual Ai4 2020 conference. AI falls into the category of an exponential technology, meaning it accelerates with time. Both biopharma and med-tech companies are increasingly pulling the technology into their business operations, working on programs that can assist in everything from drug discovery and clinical trial recruitment to precision diagnostics and patient compliance efforts. Computing power has doubled every 20 months or so for the past 120 years, Lipson said, moving from mechanical instruments to graphics processing units (GPUs) today.

Adding Seemingly Uninformative Labels Helps in Low Data Regimes Machine Learning

Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of the class diversity, which encourages learning of enriched features. This raises the question of whether this remains true when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We find that these gains increase as less expert data is available, and we uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.
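
The step of turning the binary expert task into a multi-class one can be sketched as merging the expert tumor mask with a non-expert label map, with expert pixels taking precedence. The class ids below are hypothetical, chosen only for illustration:

```python
import numpy as np

TUMOR = 3  # assumed class id for the expert tumor label

def build_multiclass_target(tumor_mask, complementary_labels):
    """Fill non-tumor pixels with seemingly uninformative labels
    (e.g. skin, nipple) from non-expert annotators; expert tumor
    pixels override them, turning a binary task into multi-class."""
    target = complementary_labels.copy()
    target[tumor_mask.astype(bool)] = TUMOR
    return target

tumor = np.array([[0, 1],
                  [0, 0]])             # binary expert mask
comp  = np.array([[1, 2],
                  [0, 2]])             # non-expert class labels
print(build_multiclass_target(tumor, comp))
# [[1 3]
#  [0 2]]
```

Training a segmenter against this richer target is what supplies the extra class diversity the abstract credits for the low-data gains.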

Deep Learning for Quantile Regression: DeepQuantreg Machine Learning

The computational prediction algorithms of neural networks, or deep learning, have recently drawn much attention in statistics as well as in image recognition and natural language processing. In statistical applications to censored survival data in particular, the loss function used for optimization has mainly been based on the partial likelihood from Cox's model and its variations, so as to utilize existing neural network libraries such as Keras, which is built on top of the open-source TensorFlow library. This paper presents a novel application of neural networks to quantile regression for survival data with right censoring, in which the check function is adjusted by the inverse of the estimated censoring distribution. The main purpose of this work is to show that the deep learning method can be flexible enough to predict nonlinear patterns more accurately than the traditional method, even in low-dimensional data, emphasizing the practicality of the method for censored survival data. Simulation studies were performed to generate nonlinear censored survival data and to compare the deep learning method with the traditional quantile regression method in terms of prediction accuracy. The proposed method is illustrated with a publicly available breast cancer data set with gene signatures. The source code is freely available at
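
The censoring-adjusted check function described here combines the standard quantile-regression (pinball) loss with inverse-probability-of-censoring weights. A minimal sketch, assuming a precomputed estimate G_hat of the censoring survival function (the paper's exact estimator and network are not reproduced), is:

```python
import numpy as np

def ipcw_check_loss(y_true, y_pred, delta, G_hat, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0}), with each
    observation weighted by delta_i / G_hat(t_i): censored points
    (delta = 0) get zero weight, uncensored ones are up-weighted by
    the inverse estimated censoring survival probability."""
    u = y_true - y_pred
    rho = u * (tau - (u < 0).astype(float))
    w = delta / G_hat
    return np.mean(w * rho)

y_true = np.array([2.0, 3.0, 5.0])    # observed (possibly censored) times
y_pred = np.array([2.5, 2.0, 5.0])    # predicted tau-th quantiles
delta  = np.array([1.0, 1.0, 0.0])    # event indicator; 0 = censored
G_hat  = np.array([0.9, 0.8, 0.7])    # estimated censoring survival
loss = ipcw_check_loss(y_true, y_pred, delta, G_hat, tau=0.5)
print(round(loss, 4))  # 0.3009
```

In the paper this weighted check function would serve as the training loss for the neural network in place of the Cox partial likelihood.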