Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare
arXiv.org Artificial Intelligence
Bias in applications of machine learning (ML) to healthcare is usually attributed to unrepresentative or incomplete data, or to underlying health disparities. This article identifies a more pervasive source of bias that affects the clinical utility of ML-enabled prediction tools: target specification bias. Target specification bias arises when the operationalization of the target variable does not match its definition by decision makers. The mismatch is often subtle, and stems from the fact that decision makers are typically interested in predicting the outcomes of counterfactual, rather than actual, healthcare scenarios. Target specification bias persists independently of data limitations and health disparities. When left uncorrected, it gives rise to an overestimation of predictive accuracy, to inefficient utilization of medical resources, and to suboptimal decisions that can harm patients. Recent work in metrology - the science of measurement - suggests ways of counteracting target specification bias and avoiding its harmful consequences.
Aug-3-2023
- Country:
  - Europe > United Kingdom (0.04)
  - North America
    - Canada > Quebec > Montreal (0.14)
    - United States
      - Arizona (0.04)
      - New York > New York County > New York City (0.04)
      - Oregon (0.04)
- Genre:
  - Research Report (0.50)
- Industry:
  - Health & Medicine
    - Diagnostic Medicine (1.00)
    - Health Care Technology (0.93)
    - Therapeutic Area
      - Dermatology (0.93)
      - Immunology (0.67)
      - Infections and Infectious Diseases (0.94)
      - Oncology (1.00)