Counterfactual Predictions under Runtime Confounding
Amanda Coston, Edward H. Kennedy, Alexandra Chouldechova
Algorithmic tools are increasingly prevalent in domains such as health care, education, lending, criminal justice, and child welfare [2, 7, 12, 15, 30]. In many cases, the tools are not intended to replace human decision-making, but rather to distill rich case information into a simpler form, such as a risk score, to inform human decision-makers [1, 9]. The type of information these tools need to convey is often counterfactual in nature: decision-makers need to know what is likely to happen if they choose to take a particular action. For instance, an undergraduate program advisor determining which students to recommend for a personalized case management program might wish to know the likelihood that a given student will graduate if enrolled in the program. In criminal justice, a parole board deciding whether to release an offender may wish to know the likelihood that the offender will succeed on parole under different possible levels of supervision intensity. A common challenge in developing valid counterfactual prediction models is that all of the data available for training and evaluation is observational: it reflects historical decisions and the outcomes under those decisions, rather than randomized trials intended to assess outcomes under different policies. If the data is confounded--that is, if there are factors not captured in the data that influenced both the outcome of interest and historical decisions--valid counterfactual prediction may not be possible.
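The confounding problem described in the abstract can be illustrated with a toy simulation (not from the paper; the variables and parameters below are entirely hypothetical). An unobserved factor U influences both the historical decision A and the outcome under treatment Y(1), so the naive estimate computed from treated cases alone is biased relative to the true counterfactual mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: U is an unobserved risk factor (e.g., case severity)
# that historical decision-makers could see but the data does not record.
u = rng.binomial(1, 0.5, n)          # unobserved confounder
x = rng.binomial(1, 0.5, n)          # observed covariate
# High-severity (u = 1) cases were historically much more likely to be treated.
a = rng.binomial(1, 0.2 + 0.6 * u)   # historical decision
# Potential outcome under treatment depends on both X and U.
y1 = rng.binomial(1, 0.8 - 0.4 * u + 0.1 * x)

# True counterfactual target: E[Y(1)], the outcome rate if everyone were treated.
true_y1 = y1.mean()

# Naive observational estimate: outcome rate among historically treated cases.
# Treated cases skew toward high severity, so this understates E[Y(1)]
# (roughly 0.53 here vs. a true value of roughly 0.65).
naive_y1 = y1[a == 1].mean()

print(f"true  E[Y(1)] = {true_y1:.3f}")
print(f"naive estimate = {naive_y1:.3f}")
```

Because U is unrecorded, no model fit to the observed data alone can remove this gap; this is the sense in which confounding can make valid counterfactual prediction impossible.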
Jun-30-2020
- Genre:
- Research Report
- Experimental Study (0.66)
- New Finding (0.46)
- Industry:
- Health & Medicine (1.00)
- Law > Criminal Law (0.54)