



How Do Fair Decisions Fare in Long-term Qualification?

Neural Information Processing Systems

Our work applies to a variety of settings, such as recruitment and bank lending, in which an institute observes individuals' features (e.g., credit scores) and makes myopic decisions (e.g., whether to issue loans) by assessing those features against some variables of interest (e.g., ability to repay) which are unknown and unobservable to the institute when making decisions. We examine whether static fairness constraints mitigate or worsen the qualification disparity in the long run.
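
To make the setting concrete, here is a minimal simulation sketch of this kind of pipeline, with a threshold decision rule and a stylized qualification transition; the group sizes, noise level, threshold, and transition probabilities below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000                          # agents per group (illustrative)
theta = 0.5                         # acceptance threshold on the observed feature
qual = {"A": rng.random(n) < 0.6,   # group A starts 60% qualified
        "B": rng.random(n) < 0.4}   # group B starts 40% qualified

def step(qualified):
    # Observed feature: true qualification plus noise (the institute never
    # sees `qualified` itself, only this noisy score).
    score = qualified + rng.normal(0.0, 0.5, size=qualified.size)
    accept = score > theta
    # Stylized transition: accepted agents end up qualified next round with
    # higher probability than rejected agents.
    p_next = np.where(accept, 0.8, 0.3)
    return rng.random(qualified.size) < p_next

for t in range(10):
    qual = {g: step(q) for g, q in qual.items()}
    print(t, {g: round(q.mean(), 3) for g, q in qual.items()})
```

Under these particular transition probabilities the two groups' qualification rates drift toward a common fixed point; changing the transitions or imposing group-dependent thresholds alters whether the rates converge or stay apart, which is the kind of long-run behavior the paper analyzes.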





Automating Data Annotation under Strategic Human Agents: Risks and Potential Solutions

Xie, Tian, Zhang, Xueru

arXiv.org Artificial Intelligence

As machine learning (ML) models are increasingly used in social domains to make consequential decisions about humans, they often have the power to reshape data distributions. Humans, as strategic agents, continuously adapt their behaviors in response to the learning system. As populations change dynamically, ML systems may need frequent updates to ensure high performance. However, acquiring high-quality human-annotated samples can be highly challenging and even infeasible in social domains. A common practice to address this issue is using the model itself to annotate unlabeled data samples. This paper investigates the long-term impacts of retraining ML models with model-annotated samples when those samples incorporate strategic human responses. We first formalize the interactions between strategic agents and the model and then analyze how they evolve under such dynamic interactions. We find that agents are increasingly likely to receive positive decisions as the model gets retrained, whereas the proportion of agents with positive labels may decrease over time. We thus propose a refined retraining process to stabilize the dynamics. Last, we examine how algorithmic fairness can be affected by these retraining processes and find that enforcing common fairness constraints at every round may not benefit the disadvantaged group in the long run. Experiments on (semi-)synthetic and real data validate the theoretical findings.
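
As a rough illustration of this retraining dynamic, the toy loop below (our own simplified construction, not the paper's model) retrains a one-dimensional threshold classifier each round on either the labels the model assigns itself or the ground-truth labels, while agents just below the bar strategically game their feature; every constant is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_threshold(x, y):
    # "Retraining": pick the error-minimizing split on the training labels.
    # np.argmin breaks ties toward smaller thresholds, so when the band of
    # feature space just below theta empties out, the learned bar can drop.
    grid = np.linspace(-1.0, 2.0, 300)
    return grid[np.argmin([np.mean((x > c) != y) for c in grid])]

def run(rounds=6, self_annotate=True):
    theta = 0.5
    for _ in range(rounds):
        x = rng.normal(0.4, 0.3, 5000)                # new unlabeled cohort
        y_true = x + rng.normal(0, 0.2, 5000) > 0.5   # hidden if self-annotating
        # Strategic response: agents within reach move just past theta
        # without changing their true qualification.
        gamed = (x < theta) & (x > theta - 0.15)
        x = np.where(gamed, theta + 1e-3, x)
        accept = x > theta
        y_train = accept if self_annotate else y_true # model vs. human labels
        theta = best_threshold(x, y_train)            # model for next round
    return round(accept.mean(), 3), round(y_true.mean(), 3)

print("self-annotated (accept rate, truly qualified):", run(self_annotate=True))
print("human-labeled  (accept rate, truly qualified):", run(self_annotate=False))
```

With self-annotation, the band just below the threshold empties out each round (gamed agents cross it and are then labeled positive by the model), so the retrained bar ratchets downward and the acceptance rate climbs even though the truly qualified share does not; with human labels, the gamed pile is mostly labeled negative and the bar holds, mirroring the abstract's contrast between positive decisions and positive labels.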


Long-Term Fairness with Unknown Dynamics

Yin, Tongxin, Raab, Reilly, Liu, Mingyan, Liu, Yang

arXiv.org Artificial Intelligence

As machine learning (ML) algorithms are deployed for tasks with real-world social consequences (e.g., school admissions, loan approval, medical interventions, etc.), the possibility exists for runaway social inequalities (Crawford and Calo, 2016; Chaney et al., 2018; Fuster et al., 2018; Ensign et al., 2018). While "fairness" has become a salient ethical concern in contemporary research, the closed-loop dynamics of real-world systems comprising ML policies and populations that mutually adapt to each other (Figure 1 in the supplementary material) remain poorly understood. In this paper, our primary contribution is to consider the problem of long-term fairness, or algorithmic fairness in the context of a dynamically responsive population, as a reinforcement learning (RL) problem subject to constraints. In our formulation, the central learning task is to develop a policy that minimizes cumulative loss (e.g., financial risk, negative educational outcomes, misdiagnoses, etc.) incurred by an ML agent interacting with a human population up to a finite time horizon, subject to constraints on cumulative "violations of fairness", which we refer to in a single time step as disparity and cumulatively as distortion.
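
A standard device for this kind of constrained problem is Lagrangian relaxation: descend on loss plus a priced disparity term while a dual variable ascends on the constraint violation. The sketch below illustrates that device on a deterministic toy environment; it is not the paper's algorithm, and the one-parameter threshold policy, quadratic loss, and disparity budget are all invented for the example.

```python
from math import erf, sqrt

def accept_rate(mu, theta, sigma=0.2):
    # Fraction of a N(mu, sigma) score distribution clearing the threshold.
    return 0.5 * (1.0 - erf((theta - mu) / (sigma * sqrt(2.0))))

def step(theta):
    ra = accept_rate(0.6, theta)       # advantaged group
    rb = accept_rate(0.4, theta)       # disadvantaged group
    loss = (theta - 0.5) ** 2          # stand-in for the agent's loss
    disparity = abs(ra - rb)           # single-step fairness violation
    return loss, disparity

def lagrangian(theta, lam):
    loss, disparity = step(theta)
    return loss + lam * disparity

theta, lam, budget, eps = 0.9, 0.0, 0.10, 1e-4
for _ in range(500):
    # Primal step: finite-difference descent on the Lagrangian.
    g = (lagrangian(theta + eps, lam) - lagrangian(theta - eps, lam)) / (2 * eps)
    theta -= 0.05 * g
    # Dual step: raise the price lam while disparity exceeds the budget.
    lam = max(0.0, lam + 0.5 * (step(theta)[1] - budget))

loss, disparity = step(theta)
print(f"theta={theta:.3f}  lambda={lam:.3f}  disparity={disparity:.3f}")
```

At the fixed point the multiplier prices disparity just enough that the policy accepts extra loss to keep per-step violations near the budget; summing those violations over the horizon is what the paper's notion of distortion constrains.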


Negative feedback loops: Using an economic model to inspect bias in AI

#artificialintelligence

Is bias in AI self-reinforcing? Decision-making systems that impact criminal justice, financial institutions, human resources, and many other areas often exhibit bias. This is especially true of algorithmic systems that learn from historical data, which tends to reflect existing societal biases. In many high-stakes applications, like hiring and lending, these decision-making systems may even reshape the underlying populations. When the system is retrained on future data, it may become not less but more detrimental to historically disadvantaged groups.
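
To see why such a loop can be self-reinforcing, here is a stylized two-group simulation in the spirit of the article (our own toy economic model, not the authors'): approvals nudge a group's future score distribution upward, denials nudge it downward, and the lender re-anchors its threshold on the pool it actually approved; every magnitude is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

mu = {"advantaged": 0.60, "disadvantaged": 0.45}    # mean credit score
theta = 0.5                                         # approval threshold

for t in range(10):
    scores = {g: rng.normal(m, 0.1, 2000) for g, m in mu.items()}
    approve = {g: s > theta for g, s in scores.items()}
    # Economic feedback: access to credit raises a group's future mean
    # score, denial lowers it (the +/- 0.02 scale is arbitrary).
    for g in mu:
        mu[g] += 0.02 * (approve[g].mean() - 0.5)
    # "Retraining": the lender re-anchors its threshold on the approved
    # pool, which the advantaged group increasingly dominates.
    approved = np.concatenate([s[a] for s, a in zip(scores.values(),
                                                    approve.values())])
    theta = 0.5 * theta + 0.5 * (approved.mean() - 0.1)
    print(t, {g: round(m, 3) for g, m in mu.items()},
          "theta:", round(theta, 3))
```

Round over round, the gap between the two groups' mean scores widens while the threshold drifts toward the advantaged pool, which is the qualitative signature of the feedback loop the article describes.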