Anticipating Gaming to Incentivize Improvement: Guiding Agents in (Fair) Strategic Classification

Sura Alhanouti, Parinaz Naghizadeh

arXiv.org Artificial Intelligence 

While the use of ML-driven systems can enhance efficiency, it can also drive the humans who are subject to algorithmic decisions to adjust their behavior accordingly. Examples include Uber drivers coordinating their behavior in response to the platform's surge pricing algorithm [Möhlmann and Zalmanson, 2017], applicants selecting keywords and formatting to pass automated resume screening [Forbes, 2022], and Facebook users adjusting their posting and content interaction choices in response to the platform's curation algorithms [Eslami et al., 2016]. These can be viewed as strategic responses by rational human subjects of these systems, motivating a game-theoretic analysis of learning algorithms with humans in the loop. Earlier works on strategic humans facing ML systems largely focused on scenarios where users can strategically alter only their observable data (e.g., students cheating to obtain better test scores, job applicants making formatting or wording changes to their CVs, or loan applicants opening several new accounts to increase their credit scores) in order to receive a favorable decision (e.g., admission to a school, a job offer, or a loan); see, e.g., [Hu et al., 2019, Milli et al., 2019]. This strategic behavior is referred to as strategic manipulation: agents change their features without changing their true qualification states. It can be interpreted as cheating the machine learning algorithm, since such agents may appear more qualified without being truly suitable for a favorable outcome.
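To make the notion of strategic manipulation concrete, the following is a minimal sketch, not the paper's model: an agent facing a known linear threshold classifier best-responds by shifting its observable feature at a quadratic cost, while its true qualification state is unchanged. The function name, the quadratic cost, and the reward/cost parameters are illustrative assumptions.

```python
def best_response(x, threshold, cost=1.0, reward=1.0):
    """Hypothetical agent best response to a threshold classifier.

    The agent's utility is reward * accepted - cost * (x' - x)^2.
    Moving exactly to the threshold is the cheapest way to be accepted,
    so the agent either jumps to the threshold or stays put.
    The agent's true qualification never changes, only the reported x.
    """
    if x >= threshold:
        return x  # already accepted; no manipulation needed
    gain = reward - cost * (threshold - x) ** 2
    # Manipulate only if the acceptance reward outweighs the cost.
    return threshold if gain > 0 else x

# Agents near the decision boundary game the classifier; agents for
# whom manipulation is too costly (large gap or high cost) do not.
reported = [best_response(x, threshold=0.5) for x in (0.45, 0.6)]
far_agent = best_response(0.2, threshold=0.5, cost=30.0)
```

Here the classifier's accuracy degrades: manipulating agents are accepted despite unchanged qualifications, which is exactly the "gaming" behavior that the strategic classification literature seeks to anticipate.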