How AI Vendors Can Navigate the Health Care Industry

#artificialintelligence

The adoption of AI in health care is being driven by exponential growth in health data, the broad availability of computational power, and foundational advances in machine learning techniques. AI has already demonstrated the potential to create value by reducing costs, expanding access, and improving quality. But for AI to realize its transformative potential at scale, its proponents need business models optimized to capture that value. AI changes the rules of business and, as ever, health care presents some unique considerations. To understand these, we studied AI across 15 sets of use cases, spanning five domains of health care (patient engagement, care delivery, population health, R&D, and administration) and three types of functions (measure, decide, and execute).


AI Algorithm Aids Early Detection of Low Ejection Fraction

#artificialintelligence

FRIDAY, May 28, 2021 (HealthDay News) -- An artificial intelligence (AI) algorithm that uses data from electrocardiography can help increase the diagnosis of low ejection fraction (EF), according to a study published online May 6 in Nature Medicine. Xiaoxi Yao, Ph.D., from the Mayo Clinic in Rochester, Minnesota, and colleagues randomly assigned 120 primary care teams, comprising 358 clinicians, to intervention (access to results from the low ejection fraction AI algorithm developed by Mayo and licensed to Anumana Inc.; 181 clinicians) or control (usual care; 177 clinicians) in a pragmatic trial at 45 clinics and hospitals. A total of 22,641 adult patients with electrocardiography performed as part of routine care were included (11,573 in the intervention group; 11,068 controls). The researchers found positive AI results, indicating a high likelihood of low EF, in 6.0 percent of patients in both arms. Clinicians in the intervention group obtained more echocardiograms for patients with positive results (49.6 versus 38.1 percent), but overall echocardiogram use was similar across the cohort (19.2 versus 18.2 percent).


Mayo Clinic AI algorithm proves effective at spotting early-stage heart disease in routine EKG data

#artificialintelligence

It remains to be seen whether the sci-fi genre is correct and artificial intelligence will one day rise up against the human race, but in the meantime, AI just might save your life. An algorithm developed by the Mayo Clinic can significantly increase the number of low ejection fraction cases caught at their earliest, most treatable stage, according to a study published this month in Nature Medicine. The condition, in which the heart is unable to pump enough blood from its chamber with each contraction, is associated with cardiomyopathy and heart failure and is often symptomless in its early stages. Traditionally, the only way to diagnose low ejection fraction is with an echocardiogram, a time-consuming and expensive cardiac ultrasound. The Mayo Clinic's AI algorithm, however, can screen for low ejection fraction from a standard 12-lead electrocardiogram (EKG) reading, a much faster and more readily available test. In the study, more than 22,600 patients received an EKG as part of their usual primary care checkups and were randomly assigned to clinicians with access to the AI's analysis of those EKGs or to clinicians providing usual care.


12 Innovations That Will Change Health Care and Medicine in the 2020s

#artificialintelligence

Pocket-size ultrasound devices that cost roughly one-fiftieth as much as the machines in hospitals (and connect to your phone). These are just some of the innovations now transforming medicine at a remarkable pace. No one can predict the future, but it can at least be glimpsed in the dozen inventions and concepts below. Like the people behind them, they stand at the vanguard of health care. Neither exhaustive nor exclusive, the list is, rather, representative of the recasting of public health and medical science likely to come in the 2020s.


AI caught a hidden problem in one patient's heart. Can it work for others?

#artificialintelligence

Somewhere in Peter Maercklein's heartbeat was an abnormality no one could find. He survived a stroke 15 years ago, but doctors never saw anything alarming on follow-up electrocardiograms. Then, one day last fall, an artificial intelligence algorithm read his EKGs and spotted something else: a ripple in the calm that indicated an elevated risk of atrial fibrillation. Specifically, the algorithm, created by physicians at Mayo Clinic, found Maercklein had an 81.49% probability of experiencing A-fib, a quivering or irregular heartbeat that can lead to heart failure and stroke. Just days later, after Maercklein agreed to participate in a research study, a wearable Holter monitor recorded an episode of A-fib while he was walking on a treadmill.


Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance

arXiv.org Machine Learning

Machine learning models, now commonly developed to screen, diagnose, or predict health conditions, are evaluated with a variety of performance metrics. An important first step in assessing the practical utility of a model is to evaluate its average performance over an entire population of interest. In many settings, it is also critical that the model makes good predictions within predefined subpopulations. For instance, showing that a model is fair or equitable requires evaluating the model's performance in different demographic subgroups. However, subpopulation performance metrics are typically computed using only data from that subgroup, resulting in higher-variance estimates for smaller groups. We devise a procedure to measure subpopulation performance that can be more sample-efficient than the typical subsample estimates. We propose using an evaluation model, a model that describes the conditional distribution of the predictive model score, to form model-based metric (MBM) estimates. Our procedure incorporates model checking and validation, and we propose a computationally efficient approximation of the traditional nonparametric bootstrap to form confidence intervals. We evaluate MBMs on two main tasks: a semi-synthetic setting where ground-truth metrics are available and a real-world hospital readmission prediction task. We find that MBMs consistently produce more accurate and lower-variance estimates of model performance for small subpopulations.
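
For intuition, here is a minimal sketch of the contrast between a naive subsample estimate and a model-based one on synthetic data. The logistic-regression evaluation model, the accuracy metric, and all variable names are illustrative placeholders, not the paper's exact procedure.

```python
# Illustrative sketch of the model-based metric (MBM) idea: instead of computing
# a subgroup metric from that subgroup's samples alone, fit an "evaluation model"
# describing how the predictive model behaves, and read the subgroup metric off
# the fitted model. The evaluation model and metric below are assumptions for
# illustration, not the procedure proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic evaluation data: features, subgroup labels, scores, and outcomes.
n = 2000
x = rng.normal(size=(n, 3))
group = rng.choice([0, 1], size=n, p=[0.95, 0.05])   # group 1 is small
y = (x[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)
score = 1 / (1 + np.exp(-x[:, 0]))                   # scores from a fixed predictive model
correct = ((score > 0.5).astype(int) == y).astype(int)

# Naive subsample estimate: accuracy computed only on the small subgroup.
naive_acc = correct[group == 1].mean()

# Model-based estimate: fit an evaluation model for P(correct | x, group) on all
# data, then average its fitted probabilities over the small subgroup.
features = np.column_stack([x, group])
eval_model = LogisticRegression().fit(features, correct)
mbm_acc = eval_model.predict_proba(features[group == 1])[:, 1].mean()

print(f"naive subgroup accuracy estimate:       {naive_acc:.3f}")
print(f"model-based subgroup accuracy estimate: {mbm_acc:.3f}")
```

Because the evaluation model is fit on the full cohort, the subgroup estimate borrows strength from the larger groups, which is where the variance reduction for small subpopulations comes from.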


Are medical AI devices evaluated appropriately?

#artificialintelligence

In just the last two years, artificial intelligence has become embedded in scores of medical devices that offer advice to ER doctors, cardiologists, oncologists, and countless other health care providers. The Food and Drug Administration has approved at least 130 AI-powered medical devices, half of them in the last year alone, and the numbers are certain to surge far higher in the next few years. Several AI devices aim to spot suspected blood clots in the lungs and alert doctors to them. Some analyze mammograms and ultrasound images for signs of breast cancer, while others examine brain scans for signs of hemorrhage. Cardiac AI devices can now flag a wide range of hidden heart problems.


Automated Seizure Detection and Seizure Type Classification From Electroencephalography With a Graph Neural Network and Self-Supervised Pre-Training

arXiv.org Artificial Intelligence

Automated seizure detection and classification from electroencephalography (EEG) can greatly improve the diagnosis and treatment of seizures. While prior studies mainly used convolutional neural networks (CNNs) that assume image-like structure in EEG signals or spectrograms, this modeling choice does not reflect the natural geometry of or connectivity between EEG electrodes. In this study, we propose modeling EEGs as graphs and present a graph neural network for automated seizure detection and classification. In addition, we leverage unlabeled EEG data using a self-supervised pre-training strategy. Our graph model with self-supervised pre-training significantly outperforms previous state-of-the-art CNN and Long Short-Term Memory (LSTM) models by 6.3 points (7.8%) in Area Under the Receiver Operating Characteristic curve (AUROC) for seizure detection and 6.3 points (9.2%) in weighted F1-score for seizure type classification. Ablation studies show that our graph-based modeling approach significantly outperforms existing CNN or LSTM models, and that self-supervision helps further improve the model performance. Moreover, we find that self-supervised pre-training substantially improves model performance on combined tonic seizures, a low-prevalence seizure type. Furthermore, our model interpretability analysis suggests that our model is better at identifying seizure regions compared to an existing CNN. In summary, our graph-based modeling approach integrates domain knowledge about EEG, sets a new state-of-the-art for seizure detection and classification on a large public dataset (5,499 EEG files), and provides better ability to identify seizure regions.
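
As a rough illustration of the graph-based modeling idea (electrodes as nodes, connectivity as edges), here is a minimal sketch in plain PyTorch. The layer definition, electrode count, adjacency, and feature sizes are assumptions for illustration only and do not reproduce the paper's architecture or its self-supervised pre-training.

```python
# Minimal sketch: model an EEG clip as a graph whose nodes are electrodes and
# whose edges encode adjacency/connectivity, then classify with a small GNN.
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One graph convolution: degree-normalized neighbor averaging + linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, num_electrodes, in_dim) per-electrode features
        # adj: (num_electrodes, num_electrodes) adjacency with self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = (adj / deg) @ x                  # average neighbors' features
        return torch.relu(self.linear(agg))

class EEGGraphClassifier(nn.Module):
    """Two graph convolutions followed by mean-pooling over electrodes."""
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.gc1 = GraphConvLayer(in_dim, hidden)
        self.gc2 = GraphConvLayer(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gc2(self.gc1(x, adj), adj)
        return self.head(h.mean(dim=1))        # pooled electrode features -> logits

# Toy usage: 19 electrodes, 64 features per electrode, 4 seizure classes (all made up).
adj = torch.eye(19) + torch.rand(19, 19).round()   # placeholder adjacency
x = torch.randn(8, 19, 64)                         # batch of 8 EEG clips
logits = EEGGraphClassifier(64, 128, 4)(x, adj)    # -> shape (8, 4)
```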


Self-supervised representation learning from 12-lead ECG data

arXiv.org Machine Learning

We put forward a comprehensive assessment of self-supervised representation learning from short segments of clinical 12-lead electrocardiography (ECG) data. To this end, we explore adaptations of state-of-the-art self-supervised learning algorithms from computer vision (SimCLR, BYOL, SwAV) and speech (CPC). In a first step, we learn contrastive representations and evaluate their quality based on linear evaluation performance on a downstream classification task. For the best-performing method, CPC, we find linear evaluation performance only 0.8% below supervised performance. In a second step, we analyze the impact of self-supervised pretraining on finetuned ECG classifiers compared to purely supervised training, and find improvements of more than 1% in downstream performance, gains in label efficiency, and increased robustness against physiological noise. All experiments are carried out exclusively on publicly available datasets, the largest collection used to date for self-supervised representation learning from ECG data, to foster reproducible research in the field of ECG representation learning.
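
To make the contrastive pretraining step concrete, here is a minimal sketch of a SimCLR-style NT-Xent objective applied to two augmented views of 12-lead ECG segments. The encoder, augmentations, segment length, and hyperparameters are placeholder assumptions, not the configurations evaluated in the paper.

```python
# Minimal sketch: contrastive pretraining on ECG segments. Two views of each
# segment are encoded; the loss pulls matching views together and pushes apart
# views of other segments in the batch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same ECG segments."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim), unit norm
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # For row i, the positive example is the other view of the same segment.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

# Toy usage with a placeholder 1D-conv encoder over 12-lead ECG segments.
encoder = torch.nn.Sequential(
    torch.nn.Conv1d(12, 32, kernel_size=7, stride=2), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(), torch.nn.Linear(32, 64),
)
segment = torch.randn(16, 12, 250)                   # batch of 16 short segments
view1 = segment + 0.01 * torch.randn_like(segment)   # placeholder augmentation
view2 = segment + 0.01 * torch.randn_like(segment)
loss = nt_xent_loss(encoder(view1), encoder(view2))
```

After pretraining, the encoder would either be frozen for linear evaluation or finetuned end-to-end on the downstream classification task, which is the comparison the abstract describes.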


Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges

arXiv.org Machine Learning

Interpretability in machine learning (ML) is crucial for high stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some are recent problems that have arisen in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.
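
As a toy illustration of challenge (2), scoring systems, the sketch below rounds sparse logistic-regression coefficients into integer point values on a public dataset. This post-hoc rounding heuristic is only for intuition; the survey's point is precisely that optimizing such scoring systems directly, rather than rounding after the fact, remains a hard open problem.

```python
# Crude stand-in for a medical-style scoring system: a handful of features, each
# assigned a small integer point value. Dataset and hyperparameters are arbitrary.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

# Sparse (L1-penalized) logistic regression keeps only a few features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

# Naive "scoring system": rescale surviving coefficients and round to integers.
coef = clf.coef_.ravel()
kept = np.flatnonzero(coef)
points = np.round(coef[kept] / np.abs(coef[kept]).min()).astype(int)

for name, pts in zip(data.feature_names[kept], points):
    print(f"{name}: {pts:+d} points")

# A deployed scoring system would pick a cut-off on each patient's total score;
# choosing the points and cut-off jointly and optimally is the open challenge.
scores = X[:, kept] @ points
```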