Why the Future of Healthcare is Federated AI - insideBIGDATA

#artificialintelligence

In this special guest feature, Akshay Sharma, Executive Vice President of Artificial Intelligence (AI) at Sharecare, highlights the advancements and impact of federated AI and edge computing in the healthcare sector, where they ensure data privacy and expand the breadth of individual, organizational, and clinical knowledge. Sharma joined Sharecare in 2021 through its acquisition of doc.ai, the Silicon Valley-based company that accelerated digital transformation in healthcare. He previously held various leadership positions there, including CTO and vice president of engineering, a role in which he developed several key technologies that power mobile-based privacy products in healthcare. In addition to his role at Sharecare, Sharma serves as CTO of TEDxSanFrancisco and is involved in initiatives to decentralize clinical trials. He holds bachelor's degrees in engineering and in information science from Visvesvaraya Technological University.


How can we keep algorithmic racism out of Canadian health care's AI toolkit?

#artificialintelligence

In health care, the promise of artificial intelligence is alluring: With the help of big data sets and algorithms, AI can aid difficult decisions, like triaging patients and determining diagnoses. And since AI leans on statistics rather than human interpretation, the idea is that it's neutral – it treats everyone in a given data set equally. In October 2019, a study published in the prestigious journal Science showed that a widely used algorithm that predicts which patients will benefit from extra medical care dramatically underestimated the health needs of the sickest Black patients. The algorithm, sold by a health services company called Optum, embodied "significant racial bias," the authors concluded, suggesting that tools used by health systems to manage the care of about 200 million Americans could incorporate similar biases. The problem was fundamental: The commercial algorithm focused on costs, not illness. In looking at which patients would benefit from additional health care services, it underestimated the needs of Black patients because they had cost the system less. But Black patients' costs weren't lower because the patients were healthier; they were lower because they had unequal access to care.
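The cost-as-proxy failure described above can be made concrete with a toy calculation (hypothetical numbers, not from the Science study): a model that scores need from past spending will rank two equally sick patients very differently when access barriers kept one patient's costs low.

```python
# Illustrative sketch: a risk score that, like the algorithm described
# above, looks only at dollars spent rather than underlying illness.

def cost_proxy_risk(past_cost):
    """Toy 'risk' score derived purely from past healthcare cost."""
    return past_cost / 1000.0

# Two equally sick patients (same severity), different access to care.
patient_a = {"severity": 8, "access": "full",    "past_cost": 12000}
patient_b = {"severity": 8, "access": "limited", "past_cost": 4000}

risk_a = cost_proxy_risk(patient_a["past_cost"])
risk_b = cost_proxy_risk(patient_b["past_cost"])

# Despite identical severity, the cost proxy flags patient A as three
# times "needier" -- patient B would be deprioritized for extra care.
print(risk_a, risk_b)  # 12.0 4.0
```

The point is structural: any proxy label correlated with access, not illness, reproduces the access gap in the model's output.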


Using a Personal Health Library-Enabled mHealth Recommender System for Self-Management of Diabetes Among Underserved Populations: Use Case for Knowledge Graphs and Linked Data

arXiv.org Artificial Intelligence

Personal health libraries (PHLs) provide a single point of secure access to patients' digital health data and enable the integration of knowledge stored in their digital health profiles with other sources of global knowledge. PHLs can help empower caregivers and health care providers to make informed decisions about patients' health by understanding medical events in the context of their lives. This paper reports the implementation of a mobile health digital intervention that incorporates both digital health data stored in patients' PHLs and other sources of contextual knowledge to deliver tailored recommendations for improving self-care behaviors in diabetic adults. We conducted a thematic assessment of patient functional and nonfunctional requirements that are missing from current EHRs, based on evidence from the literature, and used the results to identify the technologies needed to address those requirements. We describe the technological infrastructures used to construct, manage, and integrate the types of knowledge stored in the PHL. We leverage the Social Linked Data (Solid) platform to design a fully decentralized, privacy-aware platform that supports interoperability and care integration. We provide an initial prototype design of a PHL and draft a use case scenario involving four actors to demonstrate how the proposed prototype can address user requirements, including the construction and management of the PHL and its use by a mobile app that queries the knowledge stored and integrated in the PHL, in a private and fully decentralized manner, to provide better recommendations. The proposed PHL helps patients and their caregivers take a central role in making decisions about their health and equips their health care providers with informatics tools that support the collection and interpretation of the gathered knowledge.
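The linked-data idea behind a PHL can be sketched in a few lines: health facts live as (subject, predicate, object) triples, and the app answers questions by following links locally, so nothing leaves the patient's store. All `ex:` names below are illustrative placeholders, not the paper's actual schema.

```python
# Minimal triple-store sketch of a PHL query (hypothetical schema).
phl_triples = [
    ("ex:patient1", "ex:hasCondition", "ex:Type2Diabetes"),
    ("ex:patient1", "ex:hasReading",   "ex:glucose_180"),
    ("ex:glucose_180", "ex:mgPerDl",   180),
    ("ex:Type2Diabetes", "ex:recommendedAction", "ex:ReviewCarbIntake"),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Follow links: patient -> condition -> tailored recommendation.
condition = query(phl_triples, s="ex:patient1", p="ex:hasCondition")[0][2]
actions = [o for _, _, o in query(phl_triples, s=condition,
                                  p="ex:recommendedAction")]
print(actions)  # ['ex:ReviewCarbIntake']
```

In a Solid deployment the same pattern-matching would run against the patient's pod via SPARQL rather than an in-memory list, but the decentralization argument is the same: the query executes where the data lives.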


Supervised Learning in the Presence of Noise: Application in ICD-10 Code Classification

arXiv.org Artificial Intelligence

ICD coding is the international standard for capturing and reporting health conditions and diagnoses for revenue cycle management in healthcare. Manually assigning ICD codes is prone to human error due to the large code vocabulary and the similarities between codes. Since machine learning approaches require ground-truth training data, the inconsistency among human coders manifests as labeling noise, which makes training and evaluating ICD classifiers difficult. This paper investigates the characteristics of such noise in manually assigned ICD-10 codes and proposes a method to train robust ICD-10 classifiers in the presence of labeling noise. Our research concluded that the nature of this noise is systematic. Most existing methods for handling label noise assume that the noise is completely random and independent of features or labels, which is not the case for ICD data. We therefore develop a new method for training robust classifiers in the presence of systematic noise. We first identify ICD-10 codes that human coders tend to misuse or confuse, based on the codes' locations in the ICD-10 hierarchy, the types of the codes, and a baseline classifier's prediction behavior; we then develop a novel training strategy that accounts for such noise. We compared our method with a baseline that does not handle label noise and with baseline methods that assume random noise, and demonstrated that our proposed method outperforms all baselines when evaluated on expert-validated labels.
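One way to picture "systematic" rather than random noise is through the ICD-10 hierarchy itself: codes sharing a category prefix (e.g. the `E11.x` diabetes family) are the ones coders most plausibly confuse. The sketch below is a hedged illustration of that idea, not the paper's actual training strategy: it softens a one-hot target toward the code's hierarchy siblings.

```python
# Hypothetical confusion-aware label softening for ICD-10 codes.
from collections import defaultdict

def hierarchy_siblings(codes):
    """Group ICD-10 codes by their 3-character category prefix."""
    groups = defaultdict(list)
    for c in codes:
        groups[c[:3]].append(c)
    return groups

def soft_target(code, codes, confusion_mass=0.2):
    """One-hot target softened toward codes in the same category."""
    sibs = [c for c in hierarchy_siblings(codes)[code[:3]] if c != code]
    target = {c: 0.0 for c in codes}
    target[code] = 1.0 - (confusion_mass if sibs else 0.0)
    for s in sibs:
        target[s] = confusion_mass / len(sibs)
    return target

codes = ["E11.9", "E11.65", "I10", "J45.909"]
t = soft_target("E11.9", codes)
print(t)  # E11.9 keeps 0.8, sibling E11.65 gets 0.2, others 0.0
```

A random-noise baseline would spread the same mass uniformly over all codes; concentrating it on hierarchy neighbors is what encodes the structure of coder confusion.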


4 Artificial Intelligence Use Cases for Global Health from USAID - ICTworks

#artificialintelligence

Artificial intelligence (AI) has the potential to drive game-changing improvements for underserved communities in global health. In response, The Rockefeller Foundation and USAID partnered with the Bill and Melinda Gates Foundation to develop AI in Global Health: Defining a Collective Path Forward. The research began with a broad scan of instances where artificial intelligence is being used, tested, or considered in healthcare, resulting in a catalogue of over 240 examples. This grouping covers tools that leverage AI to monitor and assess population health, and to select and target public health interventions based on AI-enabled predictive analytics. It includes AI-driven data processing methods that map the spread and burden of disease, while AI predictive analytics are used to project the future spread of existing and possible outbreaks.


Applications of IoT for Healthcare

#artificialintelligence

Over the past few centuries, healthcare technology has come a long way--from the invention of the stethoscope in 1816 to robots performing surgery in 2020. As computers became more common starting in the 1960s and 1970s, researchers began to explore how they might enhance healthcare, and the first electronic health record (EHR) systems appeared by 1965 in the U.S. But it wasn't until the 1980s and 1990s that clinicians began to rely on computers for data management. Internet connectivity paved the way for much better data management, and EHRs became far more common in the 2000s. On the clinical side, healthcare technology improved greatly between the 1950s and the turn of the twenty-first century.


Clustering Left-Censored Multivariate Time-Series

arXiv.org Machine Learning

Unsupervised learning seeks to uncover patterns in data. However, different kinds of noise may impede the discovery of useful substructure in real-world time-series data. In this work, we focus on mitigating the interference of left-censorship in the task of clustering. We provide conditions under which clusters and left-censorship may be identified; motivated by this result, we develop a deep generative, continuous-time model of time-series data that clusters while correcting for censorship time. We demonstrate accurate, stable, and interpretable results on synthetic data that outperform several benchmarks. To showcase the utility of our framework on real-world problems, we study how left-censorship can adversely affect the task of disease phenotyping, resulting in the often incorrect assumption that longitudinal patient data are aligned by disease stage. In reality, patients at the time of diagnosis are at different stages of the disease, both late and early, due to differences in when patients seek medical care, and this discrepancy can confound unsupervised learning algorithms. On two clinical datasets, our model corrects for this form of censorship and recovers known clinical subtypes.
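The censorship problem can be illustrated with a toy example (purely hypothetical numbers; the paper's model is a deep generative one, not this heuristic): every patient follows the same severity trajectory, but each is first observed an unknown number of steps after onset. Without offset correction, a clustering algorithm would see early and late arrivals as different series.

```python
# Toy illustration of left-censorship in longitudinal patient data.
trajectory = [0, 1, 2, 4, 7, 11, 16, 22]  # hypothetical severity curve

def observe(offset, length=4):
    """A patient first seeks care 'offset' steps after disease onset."""
    return trajectory[offset:offset + length]

def estimate_offset(series):
    """Guess the censorship time by matching the first observed value
    against the reference trajectory."""
    return min(range(len(trajectory)),
               key=lambda k: abs(trajectory[k] - series[0]))

early = observe(0)  # [0, 1, 2, 4]
late = observe(3)   # [4, 7, 11, 16]

# A naive pointwise comparison sees two very different patients...
naive_gap = sum(abs(a - b) for a, b in zip(early, late))
# ...but the estimated offsets reveal they sit on the same trajectory.
print(estimate_offset(early), estimate_offset(late))  # 0 3
```

Clustering on offset-corrected series (rather than raw, diagnosis-aligned ones) is what prevents stage-of-presentation from masquerading as a disease subtype.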


MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset

arXiv.org Artificial Intelligence

The recent release of large-scale healthcare datasets has greatly propelled the research of data-driven deep learning models for healthcare applications. However, because such deep models are black boxes, concerns about interpretability, fairness, and bias in healthcare scenarios where human lives are at stake call for a careful and thorough examination of both datasets and models. In this work, we focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset, and conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction. In terms of interpretability, we observe that (1) the best-performing interpretability method successfully identifies critical features for mortality prediction across various prediction models; and (2) demographic features are important for prediction. In terms of fairness, we observe that (1) there exists disparate treatment in prescribing mechanical ventilation among patient groups across ethnicity, gender, and age; and (2) all of the studied mortality predictors are generally fair, while the IMV-LSTM (Interpretable Multi-Variable Long Short-Term Memory) model provides the most accurate and unbiased predictions across all protected groups. We further draw concrete connections between interpretability methods and fairness metrics by showing how feature importance from interpretability methods can help quantify potential disparities in mortality predictors.
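The group-level fairness checks described above boil down to comparing a predictor's behavior across protected groups. Below is a minimal sketch of one such metric, a positive-prediction-rate (demographic parity) gap, on entirely hypothetical data, not actual MIMIC-IV values or the paper's exact metric suite.

```python
# Sketch: per-group positive-prediction rates and their parity gap.
def group_rates(records):
    """Positive-prediction rate per protected group.

    records: iterable of (group, prediction) pairs, prediction in {0, 1}.
    """
    totals, positives = {}, {}
    for group, prediction in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + prediction
    return {g: positives[g] / totals[g] for g in totals}

# (group, model_prediction) pairs -- illustrative only.
records = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 1), ("group_b", 1), ("group_b", 0),
]

rates = group_rates(records)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, parity_gap)  # {'group_a': 0.5, 'group_b': 0.75} 0.25
```

Swapping the raw predictions for per-group feature-importance scores gives the kind of interpretability-to-fairness bridge the paper draws: the same gap computation then quantifies whether a feature drives predictions more strongly for one group than another.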


Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities

arXiv.org Artificial Intelligence

Advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore queer concerns in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues underscore the need for new directions in fairness research that take into account a multiplicity of considerations, from privacy preservation, context sensitivity and process fairness, to an awareness of sociotechnical impact and the increasingly important role of inclusive and participatory research processes. Most current approaches for algorithmic fairness assume that the target characteristics for fairness--frequently, race and legal gender--can be observed or recorded. Sexual orientation and gender identity are prototypical instances of unobserved characteristics, which are frequently missing, unknown or fundamentally unmeasurable. This paper highlights the importance of developing new approaches for algorithmic fairness that break away from the prevailing assumption of observed characteristics.


Obsolete Personal Information Update System for the Prevention of Falls among Elderly Patients

arXiv.org Artificial Intelligence

Falls are a common problem affecting older adults and a major public health issue. The Centers for Disease Control and Prevention and the World Health Organization report that one in three adults over the age of 65, and half of adults over 80, fall each year. In recent years, an ever-increasing range of applications has been developed to help deliver more effective falls prevention interventions. All of these applications rely on large personal databases of elderly patients collected from hospitals, mutual health organizations, and other organizations that care for the elderly. The information describing an elderly person is continually evolving; it may become obsolete at a given moment and contradict what we already know about the same person, so it needs to be continuously checked and updated in order to restore database consistency and provide better service. This paper provides an outline of an Obsolete personal Information Update System (OIUS) designed in the context of an elderly fall prevention project. Our OIUS aims to control and update in real time the information acquired about each older adult, provide consistent information on demand, and supply tailored interventions to caregivers and fall-risk patients. The approach outlined for this purpose is based on a polynomial-time algorithm built on top of a causal Bayesian network representing the elderly data. The result is given as a recommendation tree with some accuracy level. We conducted a thorough empirical study of this model on an elderly personal information base. Experiments confirm the viability and effectiveness of our OIUS.
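The core obsolescence check can be sketched simply (illustrative only; the paper's actual system reasons over a causal Bayesian network, not this timestamp heuristic): keep the most recent value per attribute and flag older entries that contradict it, so caregivers only see a consistent profile.

```python
# Minimal sketch of reconciling evolving, possibly obsolete patient data.
def reconcile(observations):
    """observations: list of (timestamp, attribute, value) tuples.

    Returns (current_profile, obsolete_entries): the latest value per
    attribute, plus any superseded entries that contradicted it.
    """
    latest, obsolete = {}, []
    for ts, attr, value in sorted(observations):
        if attr in latest and latest[attr][1] != value:
            obsolete.append((latest[attr][0], attr, latest[attr][1]))
        latest[attr] = (ts, value)
    profile = {attr: v for attr, (_, v) in latest.items()}
    return profile, obsolete

obs = [
    (1, "mobility", "walks unaided"),
    (5, "mobility", "uses walker"),  # newer reading contradicts the old
    (2, "vision", "corrected"),
]
profile, stale = reconcile(obs)
print(profile["mobility"])  # 'uses walker'
print(stale)                # [(1, 'mobility', 'walks unaided')]
```

A causal model adds what this heuristic lacks: when two pieces of information conflict, it can decide which is more likely obsolete given everything else known about the person, rather than always trusting the newest timestamp.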