New study examines mortality costs of air pollution in US


A team of University of Illinois researchers estimated the mortality costs associated with air pollution in the U.S. by developing and applying a novel machine learning-based method to estimate the life-years lost and costs associated with air pollution exposure. Scholars from the Gies College of Business at Illinois studied the causal effects of acute fine particulate matter exposure on mortality, health care use and medical costs among older Americans through Medicare data and a unique way of measuring air pollution via changes in local wind direction. The researchers--Tatyana Deryugina, Nolan Miller, David Molitor and Julian Reif--calculated that the reduction in particulate matter experienced between 1999 and 2013 resulted in elderly mortality reductions worth $24 billion annually by the end of that period. Garth Heutel of Georgia State University and the National Bureau of Economic Research was a co-author of the paper. "Our goal with this paper was to quantify the costs of air pollution on mortality in a particularly vulnerable population: the elderly," said Deryugina, a professor of finance who studies the health effects and distributional impact of air pollution.

Could AI reduce healthcare waste? Approach of online retailers could provide clues - MedCity News


When Amazon envisioned Alexa, an AI-powered, voice-activated customer recommendation system, it was a feat that required machine learning and massive amounts of data to answer conversational queries quickly, even in a noisy environment. Now, the same data analysis capabilities that enabled Amazon to become hyper-familiar with consumer purchasing patterns could hold the key to reducing waste in healthcare. Think about the similarities between healthcare and retail. Both industries revolve around the consumer, and both use data to gain insight into behavior and draw meaningful conclusions. In healthcare, this includes the ability to predict which consumers could develop type 2 diabetes with 95% accuracy, or to pinpoint where and when the Covid-19 virus will spread and how to protect those most vulnerable.

Is Clover Health Stock a Buy?


The company sells Medicare Advantage plans, focusing on customer experience and leveraging machine learning and artificial intelligence to …

Now Streaming: Government Data


The concept of data streaming is not new. But one of the most critical emerging uses for streaming data is in the public sector, where government agencies are eyeing its game-changing capability to advance everything from battlefield decision-making to constituent experience. IDC predicts that the collective sum of the world's data will grow 33%, to 175 zettabytes, by 2025. For context, at today's average internet connection speeds, 175 zettabytes would take 1.8 billion years for one person to download. Streaming has only further accelerated the velocity of data growth.
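The 1.8-billion-year figure can be sanity-checked with back-of-the-envelope arithmetic, assuming an average connection speed of roughly 25 Mbps (the speed is our assumption for illustration; the article does not state one):

```python
# Rough check of the "1.8 billion years to download 175 ZB" claim.
TOTAL_BITS = 175 * 10**21 * 8      # 175 zettabytes expressed in bits
SPEED_BPS = 25 * 10**6             # assumed average speed: 25 megabits/second

seconds = TOTAL_BITS / SPEED_BPS
years = seconds / (365.25 * 24 * 3600)
print(f"{years / 1e9:.2f} billion years")  # ≈ 1.77, consistent with the claim
```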

Hitting the Books: AI doctors and the dangers of tiered medical care


Healthcare is a human right, but nobody said all coverage is created equal. Artificial intelligence and machine learning systems are already making impressive inroads into the myriad fields of medicine -- from IBM's Watson: Hospital Edition and Amazon's AI-generated medical records to machine-formulated medications and AI-enabled diagnoses. But in the excerpt below from Frank Pasquale's New Laws of Robotics, we can see how the promise of faster, cheaper, and more efficient medical diagnoses generated by AI/ML systems can also serve as a double-edged sword, potentially cutting off access to cutting-edge, high-quality care provided by human doctors. Excerpted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, published by The Belknap Press of Harvard University Press. We might once have categorized a melanoma simply as a type of skin cancer.

AI-Powered Text From This Program Could Fool the Government


In October 2019, Idaho proposed changing its Medicaid program. The state needed approval from the federal government, which solicited public feedback. But half of the comments came not from concerned citizens or even internet trolls. They were generated by artificial intelligence. And a study found that people could not distinguish the real comments from the fake ones.

Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
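The decoder-vs-encoder distinction can be seen in a toy linear model, independent of the paper's Poisson/GAM machinery: a decoder with sparse factor loadings generally has a dense least-squares encoder, so every original feature contributes to every factor score. A minimal numpy sketch (illustrative only, not the authors' method):

```python
import numpy as np

# Toy decoder (factor-loading matrix) for a 2-factor, 4-feature model.
# Each row is sparse: factor 0 decodes to features 0-1, factor 1 to features 1-3.
decoder = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0, 1.0],
])

# The least-squares encoder (Moore-Penrose pseudoinverse) maps observed
# features back to factor scores. Despite the sparse decoder, it is fully
# dense: features a factor never decodes to still influence its score.
encoder = np.linalg.pinv(decoder)

print(np.count_nonzero(decoder == 0))            # 3 structural zeros in decoder
print(np.count_nonzero(np.abs(encoder) < 1e-9))  # 0 zeros in encoder
```

This is the gap the paper's encoder-sparsity constraint closes: without it, the loading matrix alone does not tell you which original features actually form each factor.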

UVA Artificial Intelligence Project Among 7 Finalists for $1 Million Prize


A UVA Health data science team is one of seven finalists in a national competition to improve healthcare with the help of artificial intelligence. UVA's proposal was selected as a finalist from among more than 300 applicants in the first-ever Centers for Medicare & Medicaid Services (CMS) Artificial Intelligence Health Outcomes Challenge. UVA's project predicts which patients are at risk for adverse outcomes and then suggests a personalized plan to ensure appropriate healthcare delivery and avoid unnecessary hospitalizations. CMS selected the seven finalists after reviewing the accuracy of their artificial intelligence models and evaluating how well healthcare providers could use visual displays created by each project team to improve outcomes and patient care. Each team of finalists received $60,000 and will compete for a grand prize of up to $1 million.

UVA artificial intelligence project among finalists for national challenge


The proposal was chosen as part of the first Centers for Medicare and Medicaid Services Artificial Intelligence Health Outcomes Challenge. It predicts the outcomes for patients and suggests a personalized plan for their health care delivery to avoid unnecessary trips to the hospital.

AI to the Rescue


America is facing a health care crisis primarily due to its aging population. Physician shortages have come to the forefront recently, as many hospitals are overwhelmed due to the COVID-19 pandemic. In truth, our looming physician shortage is a generation in the making, as baby boomer doctors retire in droves. This is all occurring as lifespans are increasing--hence, there are fewer doctors to treat more patients. Exacerbating the problem is that medical schools are not churning out medical students fast enough due to capacity constraints, and it takes 12 to 15 years to train a doctor. Today, more than half of active physicians are older than 55, and by the year 2032, the Association of American Medical Colleges projects a shortfall of 122,000 doctors in the United States.