Incentive-Aware Machine Learning: Robustness, Fairness, Improvement & Causality

Podimata, Chara

arXiv.org Artificial Intelligence 

Machine Learning (ML) algorithms are deeply embedded in various aspects of modern life, influencing everything from enhancing daily conveniences and shaping online purchasing behavior to making critical decisions in areas such as hiring, loan approvals, college admissions, and probation rulings. Given the high stakes of these decisions, individuals often have strong incentives to strategically modify the data they provide to these algorithms to secure more favorable outcomes. For instance, individuals might open additional credit accounts or take other steps to improve their credit scores before applying for a loan. In the context of college admissions, applicants may retake standardized tests like the GRE, enroll in test preparation courses, or even switch schools to boost their class rankings, all in an effort to present themselves as more competitive candidates. Such instances of "strategic adaptation" have been extensively documented across disciplines including Economics, CS, and Public Policy Bjorkegren et al. [2020], Dee et al. [2019], Dranove et al. [2003], Greenstone et al. [2022], Gonzalez-Lira and Mobarak [2019], Chang et al. [2024]. The challenge arises when decision-makers deploying ML algorithms fail to account for these adaptations, potentially undermining the original goals of the policies the algorithms are intended to support. For example, in college admissions, a student's decision to change schools solely to improve their class ranking may not reflect a substantive improvement in their qualifications. This literature review was recently published in SIGecom Exchanges.