Collaborating Authors

A Sociotechnical View of Algorithmic Fairness Machine Learning

Algorithmic fairness has been framed as a newly emerging technology that mitigates systemic discrimination in automated decision-making, providing opportunities to improve fairness in information systems (IS). However, based on a state-of-the-art literature review, we argue that fairness is an inherently social concept and that technologies for algorithmic fairness should therefore be approached through a sociotechnical lens. We advance the discourse on algorithmic fairness (AF) as a sociotechnical phenomenon. Our research objective is to embed AF in the sociotechnical view of IS. Specifically, we elaborate on why the outcomes of a system that uses algorithmic means to assure fairness depend on mutual influences between technical and social structures. This perspective can generate new insights that integrate knowledge from both technical fields and social studies, and it opens new directions for IS debates. We contribute as follows: first, we problematize fundamental assumptions in the current discourse on algorithmic fairness based on a systematic analysis of 310 articles; second, we respond to these assumptions by theorizing algorithmic fairness as a sociotechnical construct; third, we propose directions for IS researchers to enhance their impact by pursuing a distinctly sociotechnical understanding of algorithmic fairness. We call for, and undertake, a holistic approach to AF: a sociotechnical perspective on algorithmic fairness can yield holistic solutions to systemic biases and discrimination.

Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature Artificial Intelligence

Algorithmic decision-making (ADM) increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. The human-centric approach demanded by scholars and policymakers requires taking people's fairness perceptions into account when designing and implementing ADM. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 39 empirical studies spanning multiple domains and scientific disciplines. Through systematic coding, we organize the current empirical literature along four dimensions: (a) algorithmic predictors, (b) human predictors, (c) comparative effects (human decision-making vs. algorithmic decision-making), and (d) consequences of ADM. We identify considerable heterogeneity in the theoretical concepts and empirical measurements of algorithmic fairness, and we find that the insights come almost exclusively from Western democratic contexts. By advocating for more interdisciplinary research adopting a society-in-the-loop framework, we hope our work will contribute to fairer and more responsible ADM.

External Stability Auditing to Test the Validity of Personality Prediction in AI Hiring Artificial Intelligence

Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that draw on insights from psychometric testing and promise to surface personality traits indicative of future success based on job seekers' resumes or social media profiles. We interrogate the validity of such systems using the stability of the outputs they produce, noting that reliability is a necessary, but not sufficient, condition for validity. Our approach is to (a) develop a methodology for an external audit of the stability of predictions made by algorithmic personality tests, and (b) instantiate this methodology in an audit of two systems, Humantic AI and Crystal. Crucially, rather than challenging or affirming the assumptions made in psychometric testing -- that personality is a meaningful and measurable construct, and that personality traits are indicative of future success on the job -- we frame our methodology around testing the underlying assumptions made by the vendors of the algorithmic personality tests themselves. In our audit of Humantic AI and Crystal, we find that both systems show substantial instability with respect to key facets of measurement, and so cannot be considered valid testing instruments. For example, Crystal frequently computes different personality scores when the same resume is given in PDF vs. raw text format, violating the assumption that the output of an algorithmic personality test is stable across job-irrelevant variations in the input. Among other notable findings is evidence of persistent -- and often incorrect -- data linkage by Humantic AI.
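The core stability check the abstract describes -- submitting the same candidate twice with only a job-irrelevant variation and comparing the resulting trait profiles -- can be sketched as follows. This is a hypothetical illustration, not the authors' audit code; the trait names, scores, and the choice of Spearman rank correlation as the stability measure are assumptions for the example.

```python
def rank_order_stability(scores_a, scores_b):
    """Spearman rank correlation between two trait-score profiles.

    A toy stability check: the same candidate is scored twice (e.g. the
    resume submitted as PDF vs. raw text). A valid instrument should
    rank the traits (nearly) identically across the two runs.
    """
    traits = sorted(scores_a)
    assert sorted(scores_b) == traits, "runs must score the same traits"

    def ranks(scores):
        ordered = sorted(traits, key=scores.get)
        return {trait: r for r, trait in enumerate(ordered)}

    ra, rb = ranks(scores_a), ranks(scores_b)
    n = len(traits)
    d2 = sum((ra[t] - rb[t]) ** 2 for t in traits)
    return 1 - 6 * d2 / (n * (n * n - 1))  # Spearman's rho (no ties)

# Hypothetical scores for one resume submitted in two formats.
pdf_run = {"openness": 7.1, "conscientiousness": 8.2, "extraversion": 4.0}
txt_run = {"openness": 4.2, "conscientiousness": 6.0, "extraversion": 7.9}
rho = rank_order_stability(pdf_run, txt_run)
print(f"rho = {rho:.2f}")  # rho = -0.50: the trait ranking flipped
```

A score near 1 would indicate a stable profile across the format change; the strongly negative value in this made-up example is the kind of job-irrelevant instability the audit flags.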

Who Decides if AI is Fair? The Labels Problem in Algorithmic Auditing Artificial Intelligence

Labelled "ground truth" datasets are routinely used to evaluate and audit AI algorithms applied in high-stakes settings. However, there are no widely accepted benchmarks for the quality of labels in these datasets. We provide empirical evidence that the quality of labels can significantly distort the results of algorithmic audits in real-world settings. Working with data annotators of the kind typically hired by AI firms in India, we show that imperfect fidelity of the ground-truth data can lead to spurious differences in the measured performance of automatic speech recognition systems (ASRs) between urban and rural populations. After a rigorous, albeit expensive, label-cleaning process, these disparities between groups disappear. Our findings highlight how trade-offs between label quality and data annotation costs can complicate algorithmic audits in practice. They also emphasize the need to develop consensus-driven, widely accepted benchmarks for label quality.
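The mechanism behind such spurious disparities can be illustrated with a toy simulation (not the authors' data or methodology; the error and noise rates below are made-up parameters). If reference labels for one group are noisier, an audit will count label mistakes as model mistakes and that group will appear to be served worse, even when the model's true error rate is identical for both groups.

```python
import random

random.seed(0)

def measured_error(n, true_error, label_noise):
    """Error rate an auditor would measure when the reference labels
    are themselves wrong with probability `label_noise`."""
    mismatches = 0
    for _ in range(n):
        model_wrong = random.random() < true_error
        label_wrong = random.random() < label_noise
        # The audit flags a mismatch when exactly one of the two is wrong;
        # a wrong model output "matching" a wrong label goes unnoticed.
        mismatches += model_wrong != label_wrong
    return mismatches / n

# Identical true model error (10%) for both groups, but noisier
# reference transcripts for the rural group.
urban = measured_error(100_000, true_error=0.10, label_noise=0.00)
rural = measured_error(100_000, true_error=0.10, label_noise=0.08)
print(f"urban: {urban:.3f}  rural: {rural:.3f}")  # rural appears ~6pp worse
```

Cleaning the rural labels (setting `label_noise` to zero) makes the apparent disparity vanish, mirroring the paper's finding that the gap disappeared after label cleaning.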