There is no greater challenge for healthcare and life science organizations than ensuring that their digital transformation, along with better data management, improves patient outcomes, increases operational efficiency and productivity, and delivers better financial results. The drivers of healthcare and life sciences' transition from data rich to data driven are not new and include the race to manage cost and improve quality. Newer drivers include the growth of at-risk contracting for providers, the threat of care-delivery disruption by the retail industry, and, in drug discovery, the challenge of balancing speed to market with cost. The health and life science industries are data rich: IDC estimates that, on average, approximately 270 GB of healthcare and life science data will be created for every person in the world in 2020. The value for health and life science organizations comes from transforming that data into insights, coupled with establishing a data-driven culture.
In 2020, the largest U.S. health care payer, the Centers for Medicare & Medicaid Services (CMS), established payment for artificial intelligence (AI) through two different systems: the Medicare Physician Fee Schedule (MPFS) and the Inpatient Prospective Payment System (IPPS). Within the MPFS, a new Current Procedural Terminology code was valued for IDx-DR, an AI tool for the diagnosis of diabetic retinopathy. In the IPPS, Medicare established a New Technology Add-on Payment for Viz.ai software, an AI algorithm that facilitates diagnosis and treatment of large-vessel occlusion strokes. This article describes reimbursement in these two payment systems and proposes future payment pathways for AI.
In 2019, the Centers for Medicare and Medicaid Services (CMS) launched an Artificial Intelligence (AI) Health Outcomes Challenge seeking solutions to predict risk in value-based care for incorporation into CMS Innovation Center payment and service delivery models. Recently, modern language models have played key roles in a number of health-related tasks. This paper presents, to the best of our knowledge, the first application of these models to patient readmission prediction. To facilitate this, we create a dataset of 1.2 million medical history samples derived from the Limited Dataset (LDS) issued by CMS. Moreover, we propose a comprehensive modeling solution centered on a deep learning framework for this data. To demonstrate the framework, we train an attention-based Transformer to learn Medicare semantics in support of downstream prediction tasks, achieving 0.91 AUC and 0.91 recall on readmission classification. We also introduce a novel data pre-processing pipeline and discuss pertinent deployment considerations surrounding model explainability and bias.
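To make the modeling approach concrete, here is a minimal NumPy sketch of the core idea: a patient history is a sequence of claim-code IDs, a single self-attention layer contextualizes the code embeddings, and a logistic head scores readmission risk. All dimensions, weights, and names here are illustrative assumptions, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each patient history is a sequence of claim-code
# IDs; vocabulary size, embedding width, and sequence length are
# illustrative, not those of the paper's model.
vocab, d, seq_len = 500, 32, 16
emb = rng.normal(0, 0.1, (vocab, d))          # code embeddings
Wq, Wk, Wv = (rng.normal(0, 0.1, (d, d)) for _ in range(3))
w_out = rng.normal(0, 0.1, d)                 # classification head

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def readmission_score(code_ids):
    """One self-attention layer + mean pooling + logistic head."""
    x = emb[code_ids]                         # (seq, d) embedded history
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))      # (seq, seq) attention weights
    h = attn @ v                              # contextualized code vectors
    logit = h.mean(axis=0) @ w_out            # pool over the sequence
    return 1.0 / (1.0 + np.exp(-logit))       # P(readmission)

history = rng.integers(0, vocab, seq_len)     # one synthetic patient
p_readmit = readmission_score(history)
print(f"P(readmission) = {p_readmit:.3f}")
```

In a real system the weights would be learned end-to-end (the paper reports training on 1.2 million CMS-derived histories); this sketch only shows the forward pass that turns a code sequence into a risk score.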
Health insurance is a critical component of the healthcare industry, with private health insurance expenditures alone estimated at $1.1 trillion in 2016, according to the latest data available from the Centers for Medicare and Medicaid Services. This figure represents 34 percent of the 2016 National Health Expenditure of $3.3 trillion. In this article, we will look at four AI applications that are tackling problems of underutilization and fraud in the insurance industry. Some applications below claim to use artificial intelligence to improve health insurance cost efficiency while reducing money wasted on underutilized or preventable care. Other applications claim to detect fraudulent claims.
The concept of data streaming is not new. But one of the most critical emerging uses for streaming data is in the public sector, where government agencies are eyeing its game-changing capability to advance everything from battlefield decision-making to constituent experience. IDC predicts that the collective sum of the world's data will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. For context, at today's average internet connection speeds, 175 zettabytes would take one person 1.8 billion years to download. Streaming has only further accelerated the velocity of data growth.
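The 1.8-billion-year figure checks out with simple arithmetic. Assuming an average connection speed of roughly 25 Mbit/s (an approximation of the late-2010s global average, not a figure from the source):

```python
ZETTABYTE = 10**21                    # bytes
avg_speed_bps = 25e6                  # ~25 Mbit/s, assumed average speed

seconds = 175 * ZETTABYTE * 8 / avg_speed_bps   # bytes -> bits -> seconds
years = seconds / (365 * 24 * 3600)
print(f"{years / 1e9:.1f} billion years")       # → 1.8 billion years
```

The result is insensitive to the exact speed assumed: even at ten times that bandwidth, the download would still take on the order of a hundred million years.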
Healthcare is a human right; however, nobody said all coverage is created equal. Artificial intelligence and machine learning systems are already making impressive inroads into the myriad fields of medicine -- from IBM's Watson: Hospital Edition and Amazon's AI-generated medical records to machine-formulated medications and AI-enabled diagnoses. But in the excerpt below from Frank Pasquale's New Laws of Robotics, we can see how the promise of faster, cheaper, and more efficient medical diagnoses generated by AI/ML systems can also serve as a double-edged sword, potentially cutting off access to cutting-edge, high-quality care provided by human doctors. Excerpted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, published by The Belknap Press of Harvard University Press. We might once have categorized a melanoma simply as a type of skin cancer.
In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback via Medicaid.gov. But half of the comments received came not from concerned citizens or even internet trolls; they were generated by artificial intelligence. And a study found that people could not distinguish the real comments from the fake ones.
Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
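The column-clustering property mentioned above can be made concrete with a minimal sketch: classical KL-loss (Poisson-objective) NMF fit by Lee-Seung multiplicative updates on synthetic count data, after which each feature is assigned to the factor with its largest loading. This is an illustration of the classical-NMF baseline the paragraph contrasts HPF against, not the authors' GAM-based method, and the "patients"/"codes" data are synthetic.

```python
import numpy as np

def kl_div(X, WH):
    """Generalized KL divergence, the objective matching a Poisson likelihood."""
    return float((X * np.log(X / WH) - X + WH).sum())

rng = np.random.default_rng(0)

# Synthetic counts: 200 "patients" x 28 "diagnosis codes" drawn from
# 4 latent factors, each loading on a disjoint block of 7 codes
# (illustrative data, not the paper's Medicare dataset).
n, p, k = 200, 28, 4
W_true = rng.gamma(1.0, 1.0, (n, k))
H_true = np.zeros((k, p))
for f in range(k):
    H_true[f, f * 7:(f + 1) * 7] = rng.gamma(2.0, 1.0, 7)
X = rng.poisson(W_true @ H_true).astype(float) + 1e-9  # avoid log(0)

# Lee-Seung multiplicative updates for KL-loss NMF.
W = rng.random((n, k)) + 0.1
H = rng.random((k, p)) + 0.1
kl_start = kl_div(X, W @ H)
for _ in range(300):
    W *= (X / (W @ H)) @ H.T / H.sum(axis=1)
    H *= W.T @ (X / (W @ H)) / W.sum(axis=0)[:, None]
kl_end = kl_div(X, W @ H)

# Column clustering: assign each code to its largest-loading factor.
# This is the interpretability step that, per the paragraph, HPF cannot
# support without encoder sparsity.
clusters = H.argmax(axis=0)
print(f"KL objective: {kl_start:.1f} -> {kl_end:.1f}")
print("code-to-factor assignment:", clusters)
```

The factor loading matrix `H` here plays the decoder role; the point of the paragraph is that a sparse `H` alone does not guarantee a sparse mapping in the other direction, from raw features to representation coordinates.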
A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.