Collaborating Authors

obermeyer


Smarter health: How AI is transforming health care

#artificialintelligence

American health care is complex. In the first episode in our series Smarter health, we explore the potential of AI in health care -- from predicting patient risk, to diagnostics, to helping physicians make better decisions. Today, On Point: We consider whether AI's potential can be realized in our financially motivated health care system. Welcome to an On Point special series, Smarter health: Artificial intelligence and the future of American health care. In the not-so-distant future, artificial intelligence and machine learning technologies could transform the health care you receive, whether you're aware of it or not. Here are just a couple of examples. Dr. Vindell Washington is chief clinical officer at Verily Life Sciences, which is owned by Google's parent company, Alphabet. Washington oversees the development of Onduo, technology that weaves together multiple streams of complex, daily medical data to guide and personalize health care decisions across entire patient populations. VINDELL WASHINGTON [Tape]: You might have a blood pressure cuff reading, you may have a blood sugar reading, you may have some logging that you've done.


Artificial Intelligence May Change Racial Inequality in Healthcare by Eliminating Discrimination in Patients

#artificialintelligence

When a patient is rushed to the hospital, doctors ask them to rate how much pain they are feeling on a scale of 1 to 10. But pain tolerance is subjective, which can make it difficult for doctors to know why one patient's pain is worse than another's. According to a study published in Nature Medicine, researchers used AI techniques to analyze knee X-rays and predict the pain patients experienced, especially those suffering from osteoarthritis of the knee. The study involved 36,369 observations gathered from 4,172 patients. The computer analysis could pick up things that a radiologist might not record.


New Algorithms Could Reduce Racial Disparities in Health Care

WIRED

Researchers trying to improve healthcare with artificial intelligence usually subject their algorithms to a form of machine med school. Software learns from doctors by digesting thousands or millions of x-rays or other data labeled by expert humans until it can accurately flag suspect moles or lungs showing signs of Covid-19 by itself. A study published this month took a different approach--training algorithms to read knee x-rays for arthritis by using patients as the AI arbiters of truth instead of doctors. The results revealed radiologists may have literal blind spots when it comes to reading Black patients' x-rays. The algorithms trained on patients' reports did a better job than doctors at accounting for the pain experienced by Black patients, apparently by discovering patterns of disease in the images that humans usually overlook.
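The comparison at the heart of that study can be sketched as a scoring question: which predictor comes closer to the pain patients actually report? Here is a toy version with entirely hypothetical numbers (not the study's data, features, or model), where an algorithm trained on patient reports tracks them more closely than an expert severity grade does:

```python
# Toy comparison: how well two predictors account for patient-reported pain.
# "doctor_grade" stands in for a radiologist's severity score; "model_pred"
# stands in for an algorithm trained directly on patient reports.
# All numbers are hypothetical.

def mean_abs_error(preds, truth):
    """Average absolute gap between predictions and reported pain."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

patient_pain = [2, 6, 3, 8]   # what patients actually report
doctor_grade = [2, 3, 3, 4]   # expert severity grade misses some pain
model_pred   = [2, 5, 3, 7]   # trained on reports, so it tracks them closer

print(mean_abs_error(doctor_grade, patient_pain))  # 1.75
print(mean_abs_error(model_pred, patient_pain))    # 0.5
```

The point of the sketch is only the label swap: scoring predictors against patient reports rather than against expert readings changes which predictor looks better.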


Data can be a 'force for evil,' AI and machine learning experts say

#artificialintelligence

The COVID-19 pandemic has highlighted and exacerbated existing disparities in the healthcare system, including the consequences of bias on racialized or marginalized groups. Some of the ways racial bias in the healthcare system emerges are more obvious, such as horror stories of Black people being turned away at emergency departments. Others, experts said during the HIMSS Machine Learning and AI for Healthcare Digital Summit this week, are less visible – but can still be incredibly harmful. "There are other ways this bias manifests structurally that are not as potentially sort of obvious," said Kadija Ferryman, industry assistant professor of ethics and engineering at the NYU Tandon School of Engineering, at a panel on Tuesday. "That is through informatics and data."


Is Artificial Intelligence (AI) medicine racially biased?

#artificialintelligence

The power of artificial intelligence has transformed health care by using massive datasets to improve diagnostics, treatment, records management, and patient outcomes. Complex decisions that once took hours -- such as making a breast or lung cancer diagnosis based on imaging studies, or deciding when patients should be discharged -- are now resolved within seconds by machine learning and deep learning applications. Any technology, of course, will have its limitations and flaws. And over the past few years, a steady stream of evidence has demonstrated that some of these AI-powered medical technologies are replicating racial bias and exacerbating historic health care inequities. Now, amid the SARS-CoV-2 pandemic, some researchers are asking whether these new technologies might be contributing to the disproportionately high rates of virus-related illness and death among African Americans. African Americans aged 35 to 44 experience Covid-19 mortality rates that are nine times higher than those of their White counterparts.


AI systems trained on data skewed by sex are worse at diagnosing disease

#artificialintelligence

The artificial intelligence model showed great promise in predicting which patients treated in U.S. Veterans Affairs hospitals would experience a sudden decline in kidney function. But it also came with a crucial caveat: Women represented only about 6% of the patients whose data were used to train the algorithm, and it performed worse when tested on women. The shortcomings of that high-profile algorithm, built by the Google sister company DeepMind, highlight a problem that machine learning researchers working in medicine are increasingly worried about. And it's an issue that may be more pervasive -- and more insidious -- than experts previously realized, new research suggests. The study, led by researchers in Argentina and published Monday in the journal PNAS, found that when female patients were excluded from or significantly underrepresented in the training data used to develop a machine learning model, the algorithm performed worse in diagnosing them when tested across a wide range of medical conditions affecting the chest area.
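The failure mode described above only becomes visible when performance is broken out by subgroup, since an aggregate score dominated by the majority group can look fine. A minimal sketch of that per-group evaluation, using hypothetical labels and predictions rather than anything from the DeepMind or PNAS work:

```python
# Sketch: measuring a diagnostic model's accuracy separately by sex.
# An aggregate accuracy can hide a gap that per-group evaluation reveals.
# The (true label, model prediction) pairs below are hypothetical.

def accuracy(pairs):
    """Fraction of (label, prediction) pairs that agree."""
    return sum(y == yhat for y, yhat in pairs) / len(pairs)

results = {
    "male":   [(1, 1), (0, 0), (1, 1), (0, 0), (1, 1),
               (0, 0), (1, 1), (0, 0), (1, 1), (0, 0)],
    "female": [(1, 0), (0, 0), (1, 1), (0, 1)],  # far fewer cases, and worse
}

overall = accuracy([p for group in results.values() for p in group])
per_group = {sex: accuracy(pairs) for sex, pairs in results.items()}

print(round(overall, 2))  # looks acceptable in aggregate (0.86)
print(per_group)          # but the female subgroup lags badly (0.5 vs 1.0)
```

Because the majority group contributes most of the test cases, its score dominates the average, which is exactly why studies like the one above report metrics per subgroup rather than a single number.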


Racial Bias Found in a Major Health Care Risk Algorithm

#artificialintelligence

As organizations increasingly replace human decision-making with algorithms, they may assume these computer programs lack our biases. But algorithms still reflect the real world, which means they can unintentionally perpetuate existing inequality. A study published Thursday in Science found that a health care risk-prediction algorithm, a major example of a kind of tool applied to more than 200 million people in the U.S., demonstrated racial bias -- because it relied on a faulty metric for determining need. This particular algorithm helps hospitals and insurance companies identify which patients will benefit from "high-risk care management" programs, which provide chronically ill people with access to specially trained nursing staff and allocate extra primary-care visits for closer monitoring. By singling out sicker patients for more organized and specific attention, these programs aim to preemptively stave off serious complications, reducing costs and increasing patient satisfaction.
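The "faulty metric" at issue was the use of past healthcare spending as a stand-in for health need. A minimal sketch of how such a proxy can skew referrals (hypothetical patients and numbers, not the actual algorithm): if one group incurs lower costs at the same level of illness, for example because of unequal access to care, ranking by cost under-refers that group.

```python
# Sketch: why ranking patients by past cost (a proxy for need) can
# under-refer a group that incurs less cost at equal illness.
# Patients, condition counts, and costs are all hypothetical.

def refer_top(patients, key, k):
    """Refer the k patients with the highest value of `key`."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    return ranked[:k]

patients = [
    # Equal illness burden across groups, but group B incurs lower cost.
    {"id": 1, "group": "A", "chronic_conditions": 4, "past_cost": 12000},
    {"id": 2, "group": "A", "chronic_conditions": 2, "past_cost": 7000},
    {"id": 3, "group": "B", "chronic_conditions": 4, "past_cost": 6000},
    {"id": 4, "group": "B", "chronic_conditions": 2, "past_cost": 3000},
]

by_cost = refer_top(patients, "past_cost", k=2)
by_need = refer_top(patients, "chronic_conditions", k=2)

print([p["id"] for p in by_cost])  # [1, 2] -- cost proxy refers only group A
print([p["id"] for p in by_need])  # [1, 3] -- need-based metric refers the two sickest
```

Swapping the ranking key from cost to a direct measure of illness changes who gets referred, which is the mechanism behind the study's finding that removing the proxy would sharply increase referrals of Black patients.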


A biased algorithm is delaying healthcare for black people in the US

New Scientist

Black people in the US may be missing out on healthcare because a widely used algorithm is racially biased. The proportion of black people referred for extra care would more than double if the bias were removed, according to new research. Algorithms are fast becoming a key part of healthcare. Such technologies are used to screen somewhere between 100 and 200 million people in the US, says Ziad Obermeyer at the University of California, Berkeley. One example is an algorithm that is used to predict the future health of individuals based on their past health records.


Harvard researchers: 'Absurdly outdated' medical education needs more emphasis on analytics

@machinelearnbot

Love them or hate them, computers are becoming more ingrained in 21st century medical care. And in an increasingly data-driven industry, medical education hasn't kept pace. Physicians might curse their computers for sucking time away from patients or turning them into "data entry clerks," but computers aren't to blame, according to two health policy researchers from Harvard Medical School. As algorithms gradually outperform the human mind, clinicians need to place more emphasis on data science to get the most out of advanced analytics and machine learning that could have a significant impact on medical care. "Today's medical education system is ill-prepared to meet these needs," Ziad Obermeyer, M.D., and Thomas H. Lee, M.D., who also serves as chief medical officer at Press Ganey, wrote in the New England Journal of Medicine.


How machine learning could revolutionize medicine

#artificialintelligence

Doctors will one day be able to more accurately predict how long patients with fatal diseases will live. Medical systems will learn how to save money by skipping expensive and unnecessary tests. Radiologists will be replaced by computer algorithms. These are just some of the realities patients and doctors should prepare for as "machine learning" enters the world of medicine, according to Dr. Ziad Obermeyer, an assistant professor at Harvard Medical School, and Dr. Ezekiel Emanuel of the University of Pennsylvania, who recently coauthored an article in the New England Journal of Medicine on the topic. But what exactly is "machine learning"?