Medicaid


4 Senate amendments to Trump megabill that failed -- and 1 that passed

FOX News

Fox News' Chad Pergram reports the latest on the Senate's vote-a-rama from Capitol Hill. Many senators failed to get their amendments across the finish line during the chamber's vote-a-rama on Monday, leaving the future of President Donald Trump's "big, beautiful bill" uncertain. Two key failures came from Sen. Susan Collins, R-Maine, and Sen. John Cornyn, R-Texas, with the former proposing a plan that would have boosted funding for rural hospitals and the latter calling for further cuts to Medicaid. Collins and Cornyn were far from the only lawmakers whose amendments failed, however. Here are details on some of the unsuccessful efforts, plus one that succeeded with nearly unanimous support.


Dr Oz tells federal health workers AI could replace frontline doctors

The Guardian

Dr Mehmet Oz reportedly told federal staffers that artificial intelligence models may be better than frontline human physicians in his first all-staff meeting this week. Oz told staffers that if a patient went to the doctor for a diabetes diagnosis it would cost roughly $100 an hour, compared with $2 an hour for an AI visit, according to unnamed sources who spoke to Wired magazine. He added that patients may prefer an AI avatar. Oz also spent a portion of his first meeting with employees arguing they had a "patriotic duty" to remain healthy, with the goal of decreasing costs to the health insurance system. He made a similar argument at his confirmation hearing.


Examining Imbalance Effects on Performance and Demographic Fairness of Clinical Language Models

Jones, Precious, Liu, Weisi, Huang, I-Chan, Huang, Xiaolei

arXiv.org Artificial Intelligence

Data imbalance is a fundamental challenge in applying language models to biomedical applications, particularly in ICD code prediction tasks where label and demographic distributions are uneven. While state-of-the-art language models have been increasingly adopted in biomedical tasks, few studies have systematically examined how data imbalance affects model performance and fairness across demographic groups. This study fills that gap by statistically probing the relationship between data imbalance and model performance in ICD code prediction. We analyze imbalances in a standard benchmark dataset across gender, age, ethnicity, and social determinants of health using state-of-the-art biomedical language models. By deploying diverse performance metrics and statistical analyses, we explore the influence of data imbalance on performance variations and demographic fairness. Our study shows that data imbalance significantly impacts model performance and fairness, but feature similarity to the majority class may be a more critical factor. We believe this study provides valuable insights for developing more equitable and robust language models in healthcare applications.
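The kind of demographic probing the abstract describes can be sketched in a few lines: compute a performance metric per demographic group and report the gap between the best- and worst-served groups. The data, groups, and accuracy metric below are purely illustrative, not the paper's benchmark or metrics.

```python
# Toy sketch: probing per-group performance gaps for a classifier
# (hypothetical labels/predictions; no real clinical data involved).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def group_gap(y_true, y_pred, groups):
    """Accuracy per demographic group, plus the max pairwise gap."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        per_group[g] = accuracy([y_true[i] for i in idx],
                                [y_pred[i] for i in idx])
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Hypothetical outputs from an ICD-code classifier:
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
per_group, gap = group_gap(y_true, y_pred, groups)
print(per_group, gap)  # a large gap signals a potential fairness issue
```

A real audit would use task-appropriate metrics (e.g. macro-F1 for multi-label ICD prediction) and significance tests, as the study does.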


AI-driven innovation in Medicaid: enhancing access, cost efficiency, and population health management

Ingole, Balaji Shesharao, Ramineni, Vishnu, Krishnappa, Manjunatha Sughaturu, Jayaram, Vivekananda

arXiv.org Artificial Intelligence

Medicaid is a federal-state program that provides healthcare to over 80 million low-income Americans, including pregnant women, children, and individuals with disabilities. Facing rising healthcare costs, disparities in access, and the challenge of managing chronic conditions among at-risk groups, Medicaid is one of the biggest healthcare payers in the U.S. As with Medicare, the use of Artificial Intelligence (AI) offers a major opportunity to change the delivery of care and operational efficiency in Medicaid [1] [16]. While there has been extensive conversation about AI in Medicare, Medicaid's unique population and requirements call for customized AI applications [1]. AI tools can help with chronic disease management, streamlining administrative tasks, and reducing costs, especially by focusing on the social determinants of health (SDOH) that are important for Medicaid populations. The study will assess the ability of AI-enabled systems to help Medicaid meet its particular challenges while facilitating fair and quality care for its entire population of beneficiaries [8] [9].


Google flexes its health care AI muscle

#artificialintelligence

Google showed off an array of new artificial intelligence (AI)-driven health care tools on Tuesday, from a souped-up chatbot that can shed light on your medical symptoms to enhanced search features that tell you if a doctor takes Medicaid. Why it matters: There's an arms race among big tech companies to infuse their products with AI -- but the results, particularly in health care, can have unwanted consequences or pitfalls, like racial bias, privacy concerns and ethical problems. Driving the news: The "large language model" that Google has been building for the medical world -- an AI chatbot called Med-PaLM 2 -- now consistently passes medical exam questions with a score of 85%, placing it at "expert" doctor level, the company said. Yes, but: Google acknowledges AI's shortcomings in the medical realm. Meanwhile: Google's conversational AI technology Duplex has called hundreds of thousands of U.S. health care providers to see if they accept Medicaid.


What it will take to weed out AI bias in healthcare

#artificialintelligence

Artificial intelligence is being used across the healthcare industry with the goal of delivering care more efficiently and improving outcomes for patients. But if health systems and vendors aren't careful, AI has the potential to support biased decision-making and make inequities even worse. "Algorithmic bias really is the application of an algorithm that compounds existing inequity," Sarah Awan, equity fellow with CEO Action for Racial Equity and senior manager at PwC, said in a seminar hosted by the Digital Medicine Society and the Consumer Technology Association. "And that might be in socioeconomic status, race and ethnic background, religion, gender, disability, sexual orientation, etc. So while AI can help identify bias and reduce human bias, it really also has the power for bias at scale in very sensitive applications." Healthcare is behind other industries when it comes to using data analytics, said Milissa Campbell, managing director and health insights lead at NTT DATA Services.


How some states are trying to upgrade their glitchy, outdated health care technology

NPR Technology

In October, when Jamie Taylor's household monthly income fit within new state income limits after Missouri's 2021 expansion of Medicaid, she applied for health coverage. She received a rejection letter within days, stating that her earnings exceeded the acceptable limit. It was the latest blow in Taylor's ongoing campaign to get assistance from Missouri's safety net. Taylor, 41, has spent hours on the phone, enduring four-hour hold times and dropped calls. Time-sensitive documents were mailed to her home in Sikeston, but by the time they arrived she had little time to act.


Who Increases Emergency Department Use? New Insights from the Oregon Health Insurance Experiment

Denteh, Augustine, Liebert, Helge

arXiv.org Machine Learning

We provide new insights into the finding that Medicaid increased emergency department (ED) use from the Oregon experiment. Using nonparametric causal machine learning methods, we find economically meaningful treatment effect heterogeneity in the impact of Medicaid coverage on ED use. The effect distribution is widely dispersed, with significant positive effects concentrated among high-use individuals. A small group - about 14% of participants - in the right tail with significant increases in ED use drives the overall effect. The remainder of the individualized treatment effects is either indistinguishable from zero or negative. The average treatment effect is not representative of the individualized treatment effect for most people. We identify four priority groups with large and statistically significant increases in ED use - men, prior SNAP participants, adults less than 50 years old, and those with pre-lottery ED use classified as primary care treatable. Our results point to an essential role of intensive margin effects - Medicaid increases utilization among those already accustomed to ED use and who use the emergency department for all types of care. We leverage the heterogeneous effects to estimate optimal assignment rules to prioritize insurance applications in similar expansions.
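The abstract's central point, that an average treatment effect can be unrepresentative when a small right tail drives it, can be illustrated with a toy simulation. The numbers below (a 14% high-effect subgroup, the effect sizes, the noise scales) are invented for illustration and are not the paper's estimates or its estimator.

```python
# Toy illustration: individualized treatment effects (ITEs) where a small
# right-tail subgroup drives the average treatment effect (ATE).
import random

random.seed(0)
n = 1000
# Hypothetical ITE distribution: ~14% of individuals with large positive
# effects on ED use, the rest centered near zero.
ites = [random.gauss(2.5, 0.5) if random.random() < 0.14
        else random.gauss(0.0, 0.3) for _ in range(n)]

ate = sum(ites) / n
median_ite = sorted(ites)[n // 2]
share_above_ate = sum(ite > ate for ite in ites) / n
# The ATE sits well above the median ITE, and only a minority of
# individuals have effects above the average.
print(round(ate, 2), round(median_ite, 2), round(share_above_ate, 2))
```

Recovering such heterogeneity from observed outcomes, rather than simulating it, is what the paper's nonparametric causal machine learning methods are for.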


What Robots Can--and Can't--Do for the Old and Lonely

The New Yorker

It felt good to love again, in that big empty house. Virginia Kellner got the cat last November, around her ninety-second birthday, and now it's always nearby. It keeps her company as she moves, bent over her walker, from the couch to the bathroom and back again. The walker has a pair of orange scissors hanging from the handlebar, for opening mail. Virginia likes the pet's green eyes.


FairLens: Auditing Black-box Clinical Decision Support Systems

Panigutti, Cecilia, Perotti, Alan, Panisson, André, Bajardi, Paolo, Pedreschi, Dino

arXiv.org Artificial Intelligence

The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of biased models is a very delicate task which should be tackled with care, involving domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system. In this scenario, the healthcare facility experts can use FairLens on their own historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to attributes such as age, ethnicity, gender and insurance; it then assesses the model performance on such subgroups of patients, identifying those in need of expert evaluation. Finally, building on recent state-of-the-art XAI (eXplainable Artificial Intelligence) techniques, FairLens explains which elements in patients' clinical history drive the model error in the selected subgroup. Therefore, FairLens allows experts to investigate whether to trust the model and to spotlight group-specific biases that might constitute potential fairness issues.
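The stratify-then-flag step the abstract describes can be sketched simply: group patient records by an attribute and surface subgroups where the black-box model's error rate exceeds a threshold. The attribute values, the error threshold, and the function name below are illustrative assumptions, not FairLens's actual implementation.

```python
# Minimal sketch of a FairLens-style audit step: stratify patients by an
# attribute and flag subgroups with elevated model error rates.
def flag_subgroups(records, attribute, threshold=0.3):
    """records: dicts carrying the attribute plus a boolean 'error'
    marking whether the black-box model erred on that patient."""
    strata = {}
    for r in records:
        strata.setdefault(r[attribute], []).append(r["error"])
    flagged = {}
    for value, errors in strata.items():
        rate = sum(errors) / len(errors)
        if rate > threshold:
            flagged[value] = round(rate, 2)
    return flagged  # subgroups needing expert evaluation

# Hypothetical audit data:
patients = [
    {"insurance": "Medicaid", "error": True},
    {"insurance": "Medicaid", "error": True},
    {"insurance": "Medicaid", "error": False},
    {"insurance": "Private",  "error": False},
    {"insurance": "Private",  "error": True},
    {"insurance": "Private",  "error": False},
    {"insurance": "Private",  "error": False},
]
print(flag_subgroups(patients, "insurance"))
```

The full methodology goes further, using XAI techniques to explain which clinical-history elements drive the errors within each flagged subgroup.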