Policy Brief

Stanford HAI 

While machine learning applications in healthcare continue to shape patient-care experiences and medical outcomes, discriminatory AI decision-making remains a serious concern. The issue is especially pronounced in clinical settings, where individuals' well-being and physical safety are at stake and medical professionals face life-or-death decisions every day. To date, the conversation about measuring algorithmic fairness in healthcare has focused on fairness in isolation, without fully considering how fairness techniques affect clinical predictive models, which are often derived from large clinical datasets. This brief seeks to ground the debate in evidence and suggests a way forward for developing fairer ML tools in clinical settings. We studied the trade-offs that clinical predictive algorithms face between accuracy and fairness for outcomes such as in-hospital mortality, prolonged hospital stays, and 30-day hospital readmissions.
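As a minimal illustration of the kind of measurement involved, the sketch below computes one common group-fairness metric, the equalized-odds (true-positive-rate) gap, for a binary clinical outcome such as 30-day readmission. The data are synthetic and the metric choice is an assumption for illustration; the brief does not specify which fairness definitions it evaluates.

```python
# Illustrative sketch only: synthetic data, and the equalized-odds gap is
# one common fairness metric, not necessarily the one used in the brief.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives (e.g., true readmissions) the model flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def equalized_odds_gap(y_true, y_pred, group):
    """Absolute difference in TPR between patient group 0 and group 1."""
    tpr = {}
    for g in (0, 1):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tpr[g] = true_positive_rate([y_true[i] for i in idx],
                                    [y_pred[i] for i in idx])
    return abs(tpr[0] - tpr[1])

# Synthetic example: the model catches readmissions more often in group 0,
# so the gap is nonzero even if overall accuracy looks acceptable.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]   # actual 30-day readmission (1 = yes)
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # model prediction
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical patient group label

gap = equalized_odds_gap(y_true, y_pred, group)
print(f"Equalized-odds (TPR) gap: {gap:.2f}")
```

Shrinking such a gap (for example, by constraining or post-processing the model) can reduce overall predictive accuracy, which is precisely the trade-off the study measures.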
