Data Science in Healthcare - 7 Applications No one will Tell You - DataFlair


Data science is rapidly expanding into every industry, and in this article we look at how it is transforming the healthcare sector. We will cover the main data science concepts used in medicine and biotechnology. Medicine and healthcare are two of the most important parts of our lives. Traditionally, medicine relied solely on a doctor's judgment: for example, a doctor would suggest suitable treatments based on a patient's symptoms.

The Trouble with Brain Scans - Issue 98: Mind


One autumn afternoon in the bowels of UC Berkeley's Li Ka Shing Center, I was looking at my brain. I had just spent 10 minutes inside the 3 Tesla MRI scanner, the technical name for a very expensive, very high maintenance, very magnetic brain camera. Lying on my back inside the narrow tube, I had swallowed my claustrophobia and let myself be enveloped in darkness and a cacophony of foghorn-like bleats. At the time I was a research intern at UC Berkeley's Neuroeconomics Lab. That was the first time I saw my own brain from an MRI scan. It was a grayscale, 3-D reconstruction floating on the black background of a computer screen. As an undergraduate who studied neuroscience, I was enraptured. There is nothing quite like a young scientist's first encounter with an imaging technology that renders the hitherto invisible visible--magnetic resonance imaging took my breath away. I felt that I was looking not just inside my body, but into the biological recesses of my mind. It was a strange self-image, if indeed it was one.

How AI in healthcare is transforming medical imaging


Medical imaging is among the most promising clinical applications of AI, given its ability to detect and qualify a wide range of medical conditions. Medical imaging is fundamental to clinical diagnosis, patient treatment and medical research. Leveraging computer-aided diagnostics can drastically improve accuracy and specificity in detecting even the smallest radiographic abnormalities. Medical imaging produces huge datasets, which have traditionally been analysed in real time by radiologists. However, in light of a global pandemic, demand is mounting and backlogs are growing.

Major flaws found in machine learning for COVID-19 diagnosis


A coalition of AI researchers and health care professionals in fields like infectious disease, radiology, and oncology has found several common but serious shortcomings in machine learning models built for COVID-19 diagnosis or prognosis. After the start of the global pandemic, startups like DarwinAI, major companies like Nvidia, and groups like the American College of Radiology launched initiatives to detect COVID-19 from CT scans, X-rays, or other forms of medical imaging. The promise of such technology is that it could help health care professionals distinguish between pneumonia and COVID-19 or provide more options for patient diagnosis. Some models have even been developed to predict whether a person will die or need a ventilator based on a CT scan. However, researchers say major changes are needed before this form of machine learning can be used in a clinical setting.

Scientists read minds of monkeys using new ultrasound technique


Brain-machine interfaces are one of those incredible ideas that were once the reserve of science fiction. However, in recent years scientists have begun to experiment with primitive forms of the technology, even going as far as helping a quadriplegic control an exoskeleton using tiny electrode sensors implanted in his brain. Perhaps the most well-known recent investigation into brain-machine interfaces has come from Elon Musk's Neuralink, which is attempting to develop a tiny, easily implantable device that can instantly read and relay neural activity. While Neuralink is working to create a device that can be delivered into one's brain easily, these kinds of brain-machine interfaces still fundamentally require some kind of device to be surgically implanted. A new study led by researchers from Caltech is demonstrating a non-invasive brain-machine interface using functional ultrasound (fUS) technology.

Researchers enhance Alzheimer's disease classification through artificial intelligence


Spotting early clues of Alzheimer's disease may allow for lifestyle changes that could possibly delay the disease's destruction of the brain. "Improving the diagnostic accuracy of Alzheimer's disease is an important clinical goal. If we are able to increase the diagnostic accuracy of the models in ways that can leverage existing data such as MRI scans, then that can be hugely beneficial," explained corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine (BUSM). Using an advanced artificial intelligence (AI) framework based on game theory, known as a generative adversarial network (GAN), Kolachalama and his team processed brain images (some low quality, some high quality) to generate a model that classified Alzheimer's disease with improved accuracy. The quality of an MRI scan depends on the scanner instrument used.

Addressing catastrophic forgetting for medical domain expansion Artificial Intelligence

Model brittleness is a key concern when deploying deep learning models in real-world medical settings. A model that performs well at one institution may suffer a significant decline in performance when tested at other institutions. While pooling datasets from multiple institutions and re-training may provide a straightforward solution, it is often infeasible and may compromise patient privacy. An alternative approach is to fine-tune the model on subsequent institutions after training on the original institution. Notably, this approach degrades model performance at the original institution, a phenomenon known as catastrophic forgetting. In this paper, we develop an approach to address catastrophic forgetting based on elastic weight consolidation combined with modulation of batch normalization statistics under two scenarios: first, expanding the domain from one imaging system's data to another's, and second, expanding the domain from a large multi-institutional dataset to another single-institution dataset. We show that our approach outperforms several other state-of-the-art approaches and provide theoretical justification for the efficacy of batch normalization modulation. The results of this study are generally applicable to the deployment of any clinical deep learning model that requires domain expansion.
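The elastic weight consolidation (EWC) idea in this abstract can be sketched in a few lines: fine-tuning on a new institution's data is regularised by a quadratic penalty that anchors each weight to its value after original-institution training, scaled by that weight's (Fisher) importance to the original task. The following NumPy toy is a minimal sketch, not the authors' implementation; the quadratic "new-task loss", `fine_tune`, and all constants are illustrative assumptions.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC penalty: quadratic anchor to the original-institution
    weights theta_star, scaled by per-weight Fisher importance."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def fine_tune(theta_star, fisher, lam, grad_new_task, steps=200, lr=0.1):
    """Gradient descent on new-task loss + EWC penalty (toy example)."""
    theta = theta_star.copy()
    for _ in range(steps):
        g = grad_new_task(theta) + lam * fisher * (theta - theta_star)
        theta -= lr * g
    return theta

# Toy setup: new-task optimum at [2, 2]; original weights at [0, 0].
theta_star = np.zeros(2)
fisher = np.array([5.0, 0.1])  # first weight matters for the old task
grad = lambda t: t - np.array([2.0, 2.0])  # gradient of a quadratic loss

plain = fine_tune(theta_star, fisher, lam=0.0, grad_new_task=grad)
ewc = fine_tune(theta_star, fisher, lam=1.0, grad_new_task=grad)
```

Without the penalty, both weights drift to the new-task optimum; with it, the weight the old task depends on stays close to its original value while the unimportant one is free to move, which is the mechanism the paper combines with batch-normalization modulation.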

Five ways AI can democratise African healthcare


Although the potential for artificial intelligence to transform healthcare in lower income countries has been much hyped, the technology is proving genuinely useful in helping Africa overcome difficulties in tackling diseases. Such technology can automate medical tasks and help doctors to do more with limited resources. It can even accelerate advances if certain barriers are overcome. The work of minoHealth AI Labs, the Ghana-based data science start-up that I founded, offers one example. By collecting medical images, we are seeking to automate radiology through the use of deep learning.

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography Artificial Intelligence

Interpretability in machine learning models is important in high-stakes decisions, such as whether to order a biopsy based on a mammographic exam. Mammography poses important challenges that are not present in other computer vision tasks: datasets are small, confounding information is present, and it can be difficult even for a radiologist to decide between watchful waiting and biopsy based on a mammogram alone. In this work, we present a framework for interpretable machine learning-based mammography. In addition to predicting whether a lesion is malignant or benign, our work aims to follow the reasoning processes of radiologists in detecting clinically relevant semantic features of each image, such as the characteristics of the mass margins. The framework includes a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Our algorithm can incorporate a combination of data with whole-image labelling and data with pixel-wise annotations, leading to better accuracy and interpretability even with a small number of images. Our interpretable models are able to highlight the classification-relevant parts of the image, whereas other methods highlight healthy tissue and confounding information. Our models are decision aids, rather than decision makers, aimed at better overall human-machine collaboration. We do not observe a loss in mass margin classification accuracy over a black-box neural network trained on the same data.
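The case-based reasoning step can be illustrated with a small sketch in the style of prototype networks (e.g. ProtoPNet, which this line of work builds on): each spatial patch of an image's feature map is compared against learned prototype vectors, and the best-matching patch gives the similarity score for each prototype. This NumPy toy is illustrative only; `prototype_scores`, the similarity formula's constants, and the random data are assumptions, not the paper's implementation.

```python
import numpy as np

def prototype_scores(feature_map, prototypes, eps=1e-4):
    """Case-based reasoning sketch: compare every spatial patch of a
    feature map to learned prototypes and keep the best match.
    feature_map: (num_patches, D) embeddings; prototypes: (P, D)."""
    # squared L2 distance from every patch to every prototype
    d = ((feature_map[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    sim = np.log((d + 1.0) / (d + eps))  # large when a patch is close
    return sim.max(axis=0)               # best-matching patch per prototype

rng = np.random.default_rng(0)
patches = rng.normal(size=(49, 8))       # e.g. a 7x7 feature map, 8-dim
# one prototype copied from a real patch (a perfect "case"), one random
protos = np.vstack([patches[10], rng.normal(size=(1, 8))])

scores = prototype_scores(patches, protos)
```

The prototype that exactly matches a stored case gets a much higher score than the random one; in a full model these scores feed a final linear layer, and the matched patch locations are what make the prediction inspectable.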

Conditional Training with Bounding Map for Universal Lesion Detection Artificial Intelligence

Universal Lesion Detection (ULD) in computed tomography plays an essential role in computer-aided diagnosis. Promising ULD results have been reported by coarse-to-fine two-stage detection approaches, but such two-stage ULD methods still suffer from issues like an imbalance of positive vs. negative anchors during object proposal and insufficient supervision during localization regression and classification of region-of-interest (RoI) proposals. While leveraging pseudo segmentation masks such as bounding maps (BMs) can reduce these issues to some degree, it remains an open problem to effectively handle the diverse lesion shapes and sizes in ULD. In this paper, we propose BM-based conditional training for two-stage ULD, which can (i) reduce positive vs. negative anchor imbalance via a BM-based conditioning (BMC) mechanism for anchor sampling instead of the traditional IoU-based rule; and (ii) adaptively compute a size-adaptive BM (ABM) from the lesion bounding box, which is used to improve lesion localization accuracy via ABM-supervised segmentation. Experiments with four state-of-the-art methods show that the proposed approach can bring an almost free detection accuracy improvement without requiring expensive lesion mask annotations.
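To make the bounding-map idea concrete, here is a hedged sketch of one plausible construction: a soft pseudo-mask derived only from a lesion's bounding box, peaking at the box centre and decaying to zero at its edges, so it can supervise segmentation without pixel-wise mask annotations. The `bounding_map` function, its linear decay, and the coordinate layout are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def bounding_map(h, w, box):
    """Bounding-map sketch: a soft mask over an h x w grid that is 1 at
    the centre of the lesion box and decays linearly to 0 at its edges.
    box = (x0, y0, x1, y1) in pixel coordinates (assumed layout)."""
    x0, y0, x1, y1 = box
    cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # distance from the centre, normalised by the box half-size per axis
    dy = np.abs(ys - cy) / max((y1 - y0) / 2, 1e-6)
    dx = np.abs(xs - cx) / max((x1 - x0) / 2, 1e-6)
    return np.clip(1.0 - np.maximum(dy, dx), 0.0, 1.0)

# lesion box from (2, 2) to (6, 6) on an 8 x 8 grid
bm = bounding_map(8, 8, (2, 2, 6, 6))
```

Because the map's values scale with the box size, thresholding it (or using it to weight anchor sampling) naturally adapts to small and large lesions, which is the intuition behind replacing a fixed IoU rule with BM-based conditioning.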