
MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset

arXiv.org Artificial Intelligence

The recent release of large-scale healthcare datasets has greatly propelled the research of data-driven deep learning models for healthcare applications. However, due to the black-box nature of such deep models, concerns about interpretability, fairness, and biases in healthcare scenarios where human lives are at stake call for a careful and thorough examination of both datasets and models. In this work, we focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset, and conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction. In terms of interpretability, we observe that (1) the best-performing interpretability method successfully identifies critical features for mortality prediction across various prediction models; (2) demographic features are important for prediction. In terms of fairness, we observe that (1) there exists disparate treatment in prescribing mechanical ventilation among patient groups across ethnicity, gender and age; (2) all of the studied mortality predictors are generally fair, while the IMV-LSTM (Interpretable Multi-Variable Long Short-Term Memory) model provides the most accurate and unbiased predictions across all protected groups. We further draw concrete connections between interpretability methods and fairness metrics by showing how feature importance from interpretability methods can be beneficial in quantifying potential disparities in mortality predictors.
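As a rough illustration of how feature importance can feed a fairness audit of the kind described, the sketch below computes permutation importance separately within each protected group and compares the per-group values; a large spread for a feature flags a potential disparity worth examining. This is a minimal sketch, not the paper's method: `model`, `X`, `y` and `group` are assumed to be a fitted scikit-learn-style mortality classifier, a held-out pandas feature matrix, the corresponding labels, and a per-patient protected attribute.

```python
# Minimal sketch: compare permutation feature importance across protected groups.
# Assumes `model` is any fitted scikit-learn-style classifier, `X` is a held-out
# pandas DataFrame of features, `y` the mortality labels, and `group` a per-patient
# protected attribute (e.g. ethnicity). All names are illustrative.
import numpy as np
import pandas as pd
from sklearn.inspection import permutation_importance

def importance_by_group(model, X, y, group, n_repeats=20, seed=0):
    cols = {}
    for g in np.unique(group):
        mask = group == g
        result = permutation_importance(
            model, X[mask], y[mask], n_repeats=n_repeats, random_state=seed
        )
        cols[g] = result.importances_mean
    return pd.DataFrame(cols, index=X.columns)

# A large gap between a feature's importance in different groups suggests the
# model relies on it unevenly, flagging a potential disparity to audit.
# imp = importance_by_group(model, X_test, y_test, demographics["ethnicity"])
# print((imp.max(axis=1) - imp.min(axis=1)).sort_values(ascending=False).head())
```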


Artificial Intelligence In Healthcare -- Everything Artificial Intelligence + Robotics + IoT +

#artificialintelligence

Artificial intelligence (AI), machine learning, NLP, robotics, and automation are increasingly prevalent across industries and are being applied to healthcare as well. These technologies have the potential to transform all aspects of health care, from patient care to the development and production of new experimental drugs that can be rolled out faster than with traditional methods. Numerous research studies suggest that AI can outperform humans at key healthcare tasks, such as diagnosing ailments; a prominent example is AI 'outperforming' doctors at diagnosing breast cancer¹. Artificial intelligence is not a single technology but a collection of technologies that come together to form it. Tech firms and startups are also working assiduously on the same issues.


The GTEx Consortium atlas of genetic regulatory effects across human tissues

Science

The Genotype-Tissue Expression (GTEx) project was established to characterize genetic effects on the transcriptome across human tissues and to link these regulatory mechanisms to trait and disease associations. Here, we present analyses of the version 8 data, examining 15,201 RNA-sequencing samples from 49 tissues of 838 postmortem donors. We comprehensively characterize genetic associations for gene expression and splicing in cis and trans, showing that regulatory associations are found for almost all genes, and describe the underlying molecular mechanisms and their contribution to allelic heterogeneity and pleiotropy of complex traits. Leveraging the large diversity of tissues, we provide insights into the tissue specificity of genetic effects and show that cell type composition is a key factor in understanding gene regulatory mechanisms in human tissues.
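The core cis-eQTL analysis the abstract refers to can be illustrated with a toy sketch: for a given gene and a nearby variant, test whether genotype dosage is associated with expression. This is a deliberate simplification (real GTEx analyses adjust for sex, genotype principal components and inferred expression factors, and correct for multiple testing); all data and names below are simulated and illustrative.

```python
# Illustrative sketch of a single cis-eQTL test: linear regression of a gene's
# normalized expression on genotype dosage (0/1/2) at one nearby variant.
# Simplified for intuition; real analyses include covariates and multiple-testing
# correction. All data here are simulated.
import numpy as np
from scipy import stats

def cis_eqtl_test(dosage, expression):
    """Return (slope, p-value) for expression ~ dosage."""
    slope, intercept, r, p_value, stderr = stats.linregress(dosage, expression)
    return slope, p_value

rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, size=500)              # allele counts at the variant
expression = 0.3 * dosage + rng.normal(size=500)   # additive genetic effect + noise
print(cis_eqtl_test(dosage, expression))
```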


A Comprehensive Evaluation of Multi-task Learning and Multi-task Pre-training on EHR Time-series Data

arXiv.org Machine Learning

Multi-task learning (MTL) is a machine learning technique aiming to improve model performance by leveraging information across many tasks. It has been used extensively on various data modalities, including electronic health record (EHR) data. However, despite significant use on EHR data, there has been little systematic investigation of the utility of MTL across the diverse set of possible tasks and training schemes of interest in healthcare. In this work, we examine MTL across a battery of tasks on EHR time-series data. We find that while MTL does suffer from common negative transfer, we can realize significant gains via MTL pre-training combined with single-task fine-tuning. We demonstrate that these gains can be achieved in a task-independent manner and offer not only minor improvements under traditional learning, but also notable gains in a few-shot learning context, thereby suggesting this could be a scalable vehicle to offer improved performance in important healthcare contexts.
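A minimal sketch of the pre-train-then-fine-tune scheme described above, assuming a PyTorch setup with a shared sequence encoder and one head per pre-training task; the architecture, task names and hyperparameters are illustrative, not those of the paper.

```python
# Minimal PyTorch sketch: multi-task pre-training of a shared encoder over several
# EHR prediction heads, followed by single-task fine-tuning. All task names,
# shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):            # x: (batch, time, features)
        _, h = self.gru(x)
        return h.squeeze(0)          # (batch, hidden)

encoder = SharedEncoder(n_features=32)
heads = nn.ModuleDict({              # one binary head per pre-training task
    "mortality": nn.Linear(64, 1),
    "long_stay": nn.Linear(64, 1),
    "readmission": nn.Linear(64, 1),
})
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

def pretrain_step(x, labels):
    """labels: dict mapping task name -> (batch, 1) float targets."""
    z = encoder(x)
    loss = sum(loss_fn(heads[t](z), labels[t]) for t in heads)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

# Fine-tuning: reuse the pre-trained encoder (optionally partly frozen) and train
# a fresh head on the single downstream task of interest, e.g. mortality.
finetune_head = nn.Linear(64, 1)
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(finetune_head.parameters()), lr=1e-4
)
```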


Caption Health raises Fund for AI-centric medical scanning of the heart

#artificialintelligence

Founded in 2013, the California-based, AI-focused healthcare company Caption Health has raised up to 53 million dollars to upgrade its equipment and speed up medical scanning performed by registered nurses without their undergoing elaborate training. It reflects the ambition of Caption Health CEO Charles Cadieu to transform medical practice with the help of artificial intelligence. With the pandemic's onset, investors seized this opportunity, envisioning the rapid popularization of Caption Health, and contributed to equipping better AI-powered software for performing ultrasounds and scans. After the company received market authorization from the U.S. Food and Drug Administration for its cardiac ultrasound software last year, the software enabled even non-specialists to conduct ultrasounds, with the machine automating the reading and interpretation of the results. It has further helped demonstrate the accuracy of machine learning technologies recently; Robert Ochs, deputy director of the FDA's Office of In Vitro Diagnostics and Radiological Health, commented on it as well. Cadieu observed the significance of this software as a boon to COVID patients in this time of crisis, since it can quickly detect any change in cardiovascular function.


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Column: I got tested for COVID-19. Should you?

Los Angeles Times

The last time I traveled along Stadium Way I was headed to a Dodger game, but on Monday afternoon I drove to the fire training center near the ballpark for a much less enjoyable experience. Just a cotton swab and a five-minute drive-through, with results to follow in a few days. I was conflicted about being tested, for two reasons. First, while we definitely needed to ramp up testing back at the beginning of this crisis, I'm wondering if the county has now gone overboard in offering free testing to all residents, whether or not they have symptoms. Second, I'm pretty sure that my minor allergy-like symptoms are just that: allergies.


Privacy-preserving Learning via Deep Net Pruning

arXiv.org Machine Learning

Data privacy has become one of the top concerns in machine learning with deep neural networks, since there is an increasing demand to train deep net models on distributed, private data sets. For example, hospitals are now training their automated diagnosis systems on private patients' data [LST+16, LS17, DFLRP+18]; and advertisement providers are collecting users' online trajectories to optimize their learning-based recommendation algorithms [CAS16, YHC+18]. These private data, however, are usually decentralized in nature, and policies such as the Health Insurance Portability and Accountability Act (HIPAA) [Act96] and the California Consumer Privacy Act (CCPA) [Leg18] restrict the exchange of raw data among distributed users. Various schemes have been proposed for privacy-sensitive deep learning with distributed private data, where model updates [KMY+16] or hidden-layer representations [VGSR18] are exchanged instead of the raw data. However, recent research has identified that even if the raw data are kept private, sharing the model updates or hidden-layer activations can still leak sensitive information about the input, which we refer to as the victim.
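The "exchange model updates instead of raw data" setting mentioned above can be sketched in a federated-averaging style: each hospital trains locally and ships only weights to a server, which averages them. This is an illustration of the setting the abstract describes (and whose leakage risk it highlights), not the paper's pruning-based approach; all names are illustrative.

```python
# Minimal sketch of the "share model updates, not raw data" pattern referenced
# above (federated-averaging style). Illustrates the setting only, not the
# paper's pruning-based defense; all names are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=0.01):
    """Each hospital trains a copy of the global model on its private data and
    returns only the resulting weights; the raw records never leave the site."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(global_model, client_states):
    """The server averages the clients' weights into the new global model."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in client_states]).mean(dim=0)
    global_model.load_state_dict(avg)
    return global_model
```

As the abstract notes, these exchanged updates can themselves leak information about the underlying records, which is the leakage the paper's approach targets.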


Making Logic Learnable With Neural Networks

arXiv.org Artificial Intelligence

While neural networks are good at learning unspecified functions from training samples, they cannot be directly implemented in hardware and are often not interpretable or formally verifiable. On the other hand, logic circuits are implementable, verifiable, and interpretable but are not able to learn from training data in a generalizable way. We propose a novel logic learning pipeline that combines the advantages of neural networks and logic circuits. Our pipeline first trains a neural network on a classification task, and then translates this, first to random forests or look-up tables, and then to AND-Inverter logic. We show that our pipeline maintains greater accuracy than naive translations to logic, and minimizes the logic such that it is more interpretable and has decreased hardware cost. We show the utility of our pipeline on a network that is trained on biomedical data from patients presenting with gastrointestinal bleeding with the prediction task of determining if patients need immediate hospital-based intervention. This approach could be applied to patient care to provide risk stratification and guide clinical decision-making.
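The first translation step of the pipeline (network to random forest) can be sketched as simple distillation: fit a forest to the trained network's own predictions. The later translations to look-up tables and AND-Inverter logic, and the logic minimization, are not shown; everything below is an illustrative toy on simulated data, not the paper's implementation.

```python
# Toy sketch of the first translation step described above: distill a trained
# neural network into a random forest by fitting the forest to the network's own
# predictions. Later steps (look-up tables, AND-Inverter logic, minimization) are
# omitted; data and settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))                    # stand-in for patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # stand-in for "needs intervention"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Distillation: the forest imitates the network's labels, yielding a discrete,
# rule-like model that can later be compiled toward logic circuits.
forest = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=0)
forest.fit(X_train, net.predict(X_train))

print("net accuracy:   ", net.score(X_test, y_test))
print("forest fidelity:", forest.score(X_test, net.predict(X_test)))
```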


Pear Therapeutics Expands Pipeline with Machine Learning, Digital Therapeutic and Digital Biomarker Technologies - Pear Therapeutics

#artificialintelligence

Boston and San Francisco, January 7, 2020 – Pear Therapeutics, Inc., the leader in Prescription Digital Therapeutics (PDTs), announced today that it has entered into agreements with multiple technology innovators, including Firsthand Technology, Inc., leading researchers from the Karolinska Institute in Sweden, Cincinnati Children's Hospital Medical Center, Winterlight Labs, Inc., and NeuroLex Laboratories, Inc. These new agreements continue to bolster Pear's PDT platform, by adding to its library of digital biomarkers, machine learning algorithms, and digital therapeutics. Pear's investment in these cutting-edge technologies further supports its strategy to create the broadest and deepest toolset for the development of PDTs that redefine standard of care in a range of therapeutic areas. With access to these new technologies, Pear is positioned to develop PDTs in new disease areas, while leveraging machine learning to personalize and improve its existing PDTs. "We are excited to announce these agreements, which expand the leading PDT platform," said Corey McCann, M.D., Ph.D., President and CEO of Pear.