Modelling the long-term fairness dynamics of data-driven targeted help on job seekers

Scher, Sebastian, Kopeinik, Simone, Trügler, Andreas, Kowald, Dominik

arXiv.org Artificial Intelligence

The use of data-driven decision support by public agencies is becoming more widespread and already influences the allocation of public resources. This raises ethical concerns, as it has adversely affected minorities and historically discriminated groups. In this paper, we combine statistical and data-driven approaches with dynamical modeling to assess the long-term fairness effects of labor market interventions. Specifically, we develop and use a model to investigate the impact of decisions made by a public employment authority that selectively supports job-seekers through targeted help. The selection of who receives what help is based on a data-driven intervention model that estimates an individual's chances of finding a job in a timely manner, and rests upon data describing a population in which skills relevant to the labor market are unevenly distributed between two groups (e.g., males and females). The intervention model has incomplete access to an individual's actual skills and can augment this with knowledge of the individual's group affiliation, thus using a protected attribute to increase predictive accuracy. We assess this intervention model's dynamics over time -- especially fairness-related issues and trade-offs between different fairness goals -- and compare it to an intervention model that does not use group affiliation as a predictive feature. We conclude that in order to quantify the trade-off correctly and to assess the long-term fairness effects of such a system in the real world, careful modeling of the surrounding labor market is indispensable.
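The core mechanism in this abstract -- a predictor with only noisy access to skill, which can optionally condition on group membership -- can be illustrated with a small simulation. The following is a hypothetical sketch, not the authors' actual model; all distributions and parameters (group means of +/-0.5, noise scales, sample size) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Two groups with unevenly distributed skill (illustrative group means +/-0.5).
group = rng.integers(0, 2, n)
group_mean = np.where(group == 1, 0.5, -0.5)
skill = rng.normal(group_mean, 1.0)

# Outcome: finding a job in time depends on actual skill plus chance.
finds_job = skill + rng.normal(0.0, 0.5, n) > 0

# The intervention model only sees a noisy proxy of skill.
observed = skill + rng.normal(0.0, 1.0, n)

# Model A ignores group membership; model B augments the noisy proxy with
# the group mean, which under these Gaussian assumptions is equivalent to
# thresholding the Bayes posterior mean of skill.
pred_a = observed > 0
pred_b = observed + group_mean > 0

acc_a = (pred_a == finds_job).mean()
acc_b = (pred_b == finds_job).mean()
print(f"accuracy without group feature: {acc_a:.3f}")
print(f"accuracy with group feature:    {acc_b:.3f}")
```

In this toy setup, model B is more accurate precisely because it exploits the protected attribute, which is the accuracy-versus-fairness trade-off the abstract sets out to study over time.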


Mitigating Bias in Machine Learning: An introduction to MLFairnessPipeline

#artificialintelligence

Bias takes many different forms and impacts all groups of people. It can range from implicit to explicit and is often very difficult to detect. In the field of machine learning, bias is often subtle and hard to identify, let alone resolve. Why is this a problem? Implicit bias in machine learning has very real consequences, including denial of a loan, a lengthier prison sentence, and many other harmful outcomes for underprivileged groups.


Interventions for Ranking in the Presence of Implicit Bias

Celis, L. Elisa, Mehrotra, Anay, Vishnoi, Nisheeth K.

arXiv.org Artificial Intelligence

It is well understood that implicit bias is a factor in adverse effects against subpopulations in many societal contexts [1,6,42], as also highlighted by recent events in the popular press [22,38,61]. For instance, in employment decisions, men are perceived as more competent and given a higher starting salary even when qualifications are the same [52], and in managerial jobs, it was observed that women had to show roughly twice as much evidence of competence as men to be seen as equally competent [37,59]. In education, implicit biases have been shown to exist in ways that exacerbate the achievement gap for racial and ethnic minorities [53] and female students [41], and add to the large racial disparities in school discipline, which particularly affect black students' school performance and future prospects [45]. Beyond negatively impacting social opportunities, implicit biases have been shown to put lives at stake, as they are a factor in police decisions to shoot, negatively impacting people who are black [20] and of other racial or ethnic minorities [48]. Furthermore, decision making that relies on biased measures of quantities such as utility can not only adversely impact those perceived more negatively, but can also lead to sub-optimal outcomes for those harboring these unconscious biases. To combat this, significant effort has been put into developing anti-bias training with the goal of eliminating or reducing implicit biases [24, 39, 64].
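A common way to formalize this setting, which the sketch below adopts as an assumption, is a multiplicative implicit-bias model: both groups have identical true utility distributions, but one group's utility is perceived at only a fraction beta of its true value, and the intervention is a lower-bound representation constraint on the selected set. All parameters here (beta = 0.7, pool size, number of slots) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, k = 1000, 0.7, 100

# Two groups with identical true utility distributions.
group = rng.integers(0, 2, n)
true_util = rng.random(n)

# Implicit bias: group 1's utility is perceived at a fraction beta of its value.
perceived = np.where(group == 1, beta * true_util, true_util)

# Unconstrained selection: top-k candidates by (biased) perceived utility.
top_biased = np.argsort(-perceived)[:k]

# Intervention: reserve half of the k slots for each group, filling each
# quota with that group's best candidates by perceived utility.
slots = k // 2
idx0 = np.where(group == 0)[0]
idx1 = np.where(group == 1)[0]
best0 = idx0[np.argsort(-perceived[idx0])[:slots]]
best1 = idx1[np.argsort(-perceived[idx1])[:slots]]
top_constrained = np.concatenate([best0, best1])

share_biased = (group[top_biased] == 1).mean()
util_biased = true_util[top_biased].sum()
util_constrained = true_util[top_constrained].sum()
print(f"group-1 share without constraint: {share_biased:.2f}")
print(f"true utility: biased {util_biased:.1f}, constrained {util_constrained:.1f}")
```

In this toy instance the unconstrained ranking all but excludes the biased-against group, and the representation constraint both restores parity and increases the total *true* utility of the selection, since the biased scores undervalue genuinely strong candidates.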