The implicit fairness criterion of unconstrained learning
Liu, Lydia T., Simchowitz, Max, Hardt, Moritz
Although many fairness-promoting interventions have been proposed in the machine learning literature, unconstrained learning remains the dominant paradigm among practitioners for learning risk scores from data. Given a prespecified class of models, unconstrained learning simply seeks to minimize the average prediction loss over a labeled dataset, without explicitly correcting for disparity with respect to sensitive attributes, such as race or gender. Many criticize the practice of unconstrained machine learning for propagating harmful biases [Crawford, 2013, Barocas and Selbst, 2016, Crawford, 2017]. Others see merit in unconstrained learning for reducing bias in consequential decisions [Corbett-Davies et al., 2017a,b, Kleinberg et al., 2018]. In this work, we show that defaulting to unconstrained learning does not neglect fairness considerations entirely. Instead, it prioritizes one notion of "fairness" over others: unconstrained learning achieves calibration with respect to one or more sensitive attributes, as well as a related criterion called sufficiency [e.g., Barocas et al., 2018], at the cost of violating other widely used fairness criteria, separation and independence (see Section 1.2 for references). A risk score is calibrated for a group if the risk score obviates the need to solicit group membership for the purpose of predicting an outcome variable of interest. The concept of calibration has a venerable history in statistics and machine learning [Cox, 1958, Murphy and Winkler, 1977, Dawid, 1982, DeGroot and Fienberg, 1983, Platt, 1999, Zadrozny and Elkan, 2001, Niculescu-Mizil and Caruana, 2005]. The appearance of calibration as a widely adopted and discussed "fairness criterion" largely resulted from a recent debate around fairness in recidivism prediction and pretrial detention.
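For concreteness, the four criteria named above admit standard formalizations (following, e.g., Barocas et al., 2018). The notation below, with R the risk score, A the sensitive attribute, and Y the outcome, is a conventional choice, not taken from this abstract:

```latex
\[
\begin{aligned}
\text{Independence:} &\quad R \perp A, \\
\text{Separation:}   &\quad R \perp A \mid Y, \\
\text{Sufficiency:}  &\quad Y \perp A \mid R, \\
\text{Calibration:}  &\quad \mathbb{E}[Y \mid R = r,\ A = a] = r
  \quad \text{for all scores } r \text{ and groups } a.
\end{aligned}
\]
```

Under this formalization, the paper's claim is that minimizing average loss tends to produce scores satisfying the last two conditions while generically violating the first two.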
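As an illustration of the unconstrained-learning default, the following sketch fits a logistic regression by ordinary loss minimization on synthetic data and then inspects calibration within each group. The data-generating process and all names here are hypothetical, chosen only to make the group-wise calibration check concrete:

```python
# Hypothetical sketch (not from the paper): fit an unconstrained model and
# check group-wise calibration of its risk scores on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: one feature x, a binary sensitive attribute a, outcome y.
n = 20_000
a = rng.integers(0, 2, size=n)                      # group membership (0 or 1)
x = rng.normal(loc=0.5 * a, scale=1.0).reshape(-1, 1)  # feature shifted by group
p_true = 1 / (1 + np.exp(-(1.5 * x[:, 0] - 0.25)))  # true outcome probability
y = rng.binomial(1, p_true)

# Unconstrained learning: minimize average log loss, no fairness constraint.
model = LogisticRegression().fit(x, y)
r = model.predict_proba(x)[:, 1]                    # learned risk scores

# Calibration check per group: within each score bin, the empirical outcome
# rate should roughly match the mean score, separately for each group.
bins = np.quantile(r, np.linspace(0, 1, 11))
for g in (0, 1):
    mask = a == g
    idx = np.digitize(r[mask], bins[1:-1])          # bin index 0..9
    for b in range(10):
        sel = idx == b
        if sel.sum() > 50:
            print(f"group={g} bin={b} "
                  f"mean_score={r[mask][sel].mean():.3f} "
                  f"outcome_rate={y[mask][sel].mean():.3f}")
```

In this well-specified setting the per-group outcome rates track the mean scores closely, matching the abstract's claim that unconstrained loss minimization yields calibrated scores without any explicit fairness correction.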
Jan-25-2019