Health insurance is a critical component of the healthcare industry, with private health insurance expenditures alone estimated at $1.1 trillion in 2016, according to the latest data available from the Centers for Medicare and Medicaid Services. That figure represents 34 percent of the $3.3 trillion 2016 National Health Expenditure. In this article, we will look at four AI applications tackling problems of underutilization and fraud in the insurance industry. Some of the applications below claim to use artificial intelligence to improve health insurance cost efficiency and reduce money wasted on underutilized or preventable care. Others claim to detect fraudulent claims.
A team of University of Illinois researchers estimated the mortality costs associated with air pollution in the U.S. by developing and applying a novel machine learning-based method to estimate the life-years lost and costs associated with air pollution exposure. Scholars from the Gies College of Business at Illinois studied the causal effects of acute fine particulate matter exposure on mortality, health care use and medical costs among older Americans, using Medicare data and a unique way of measuring air pollution via changes in local wind direction. The researchers--Tatyana Deryugina, Nolan Miller, David Molitor and Julian Reif--calculated that the reduction in particulate matter experienced from 1999 to 2013 resulted in elderly mortality reductions worth $24 billion annually by the end of that period. Garth Heutel of Georgia State University and the National Bureau of Economic Research was a co-author of the paper. "Our goal with this paper was to quantify the costs of air pollution on mortality in a particularly vulnerable population: the elderly," said Deryugina, a professor of finance who studies the health effects and distributional impact of air pollution.
When Amazon envisioned Alexa, its AI-powered, voice-activated assistant, it was a feat that required machine learning and massive amounts of data to answer conversational queries quickly, even in a noisy environment. Now, the same data analysis capabilities that let Amazon become hyper-familiar with consumer purchasing patterns could hold the key to reducing waste in healthcare. Consider the similarities between healthcare and retail: both industries revolve around the consumer, and both use data to gain insight into behavior and draw meaningful conclusions. In healthcare, this includes the ability to predict which consumers could develop type 2 diabetes with 95% accuracy, or to pinpoint where and when the Covid-19 virus will spread and how to protect those most vulnerable.
The concept of data streaming is not new. But one of the most critical emerging uses for streaming data is in the public sector, where government agencies are eyeing its game-changing capability to advance everything from battlefield decision-making to constituent experience. IDC predicts that the collective sum of the world's data will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. For context, at today's average internet connection speeds, 175 zettabytes would take one person 1.8 billion years to download. Streaming has only further accelerated the velocity of data growth.
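The 1.8-billion-year download figure can be sanity-checked with a few lines of arithmetic. The average connection speed assumed below (25 megabits per second) is our own estimate for illustration; the article does not state the speed behind its calculation:

```python
# Rough check of the "1.8 billion years to download 175 ZB" claim.
# Assumption (not from the article): average speed ~25 Mbit/s.
ZETTABYTE_BITS = 8 * 10**21          # 1 zettabyte = 10**21 bytes
total_bits = 175 * ZETTABYTE_BITS    # 1.4e24 bits
speed_bps = 25e6                     # assumed 25 Mbit/s

seconds = total_bits / speed_bps
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")          # on the order of 1.8 billion years
```

At that assumed speed the arithmetic lands within a few percent of the quoted figure, which suggests IDC's estimate used a broadly similar average rate.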
Healthcare is a human right, but nobody said all coverage is created equal. Artificial intelligence and machine learning systems are already making impressive inroads into the myriad fields of medicine -- from IBM's Watson: Hospital Edition and Amazon's AI-generated medical records to machine-formulated medications and AI-enabled diagnoses. But in the excerpt below from Frank Pasquale's New Laws of Robotics, we can see how the promise of faster, cheaper, and more efficient medical diagnoses generated by AI/ML systems can also be a double-edged sword, potentially cutting off access to the cutting-edge, high-quality care provided by human doctors. Excerpted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, published by The Belknap Press of Harvard University Press. We might once have categorized a melanoma simply as a type of skin cancer.
In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback via Medicaid.gov. But half of the comments received came not from concerned citizens or even internet trolls: they were generated by artificial intelligence. And a study found that people could not distinguish the real comments from the fake ones.
Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
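To make the decoder/encoder distinction concrete, here is a minimal classical-NMF sketch in plain NumPy. This is a toy illustration of the factorization the abstract builds on, not the authors' HPF/GAM method: the loading matrix H shows how each factor is decoded into the original features, while the encoding of the data X into scores W is a separate fit that, in general, draws on every feature.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy count matrix: 20 "patients" x 6 "comorbidity" count features
X = rng.poisson(lam=3.0, size=(20, 6)).astype(float)

k = 2                                 # number of latent factors
W = rng.random((20, k)) + 0.1         # patient scores (encoder side)
H = rng.random((k, 6)) + 0.1          # factor loadings (decoder side)

eps = 1e-9
# Lee & Seung multiplicative updates for Frobenius-loss NMF
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)

# Each row of H says how a factor is *decoded* into features; but
# computing W from X (the encoding) generally involves every feature,
# which is exactly the encoder-sparsity gap the paper addresses.
print("loadings H:\n", np.round(H, 2))
print("relative reconstruction error:", round(rel_err, 3))
```

Even when H is made sparse (e.g., via L1 penalties), the nonnegative least-squares problem that maps X to W remains dense in the features, which is why decoder sparsity alone does not yield the column-clustering interpretation the abstract describes.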
A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.