Schoeffer, Jakob
A Critical Survey on Fairness Benefits of XAI
Deck, Luca, Schoeffer, Jakob, De-Arteaga, Maria, Kühl, Niklas
In this critical survey, we analyze typical claims on the relationship between explainable AI (XAI) and fairness to disentangle the multidimensional relationship between these two concepts. Based on a systematic literature review and a subsequent qualitative content analysis, we identify seven archetypal claims from 175 papers on the alleged fairness benefits of XAI. We present crucial caveats with respect to these claims and provide an entry point for future discussions around the potentials and limitations of XAI for specific fairness desiderata. Importantly, we notice that claims are often (i) vague and simplistic, (ii) lacking normative grounding, or (iii) poorly aligned with the actual capabilities of XAI. We encourage conceiving of XAI not as an ethical panacea but as one of many tools to approach the multidimensional, sociotechnical challenge of algorithmic fairness. Moreover, when making a claim about XAI and fairness, we emphasize the need to be more specific about what kind of XAI method is used, which fairness desideratum it refers to, how exactly it enables fairness, and which stakeholder benefits from it.
On the Interdependence of Reliance Behavior and Accuracy in AI-Assisted Decision-Making
Schoeffer, Jakob, Jakubik, Johannes, Voessing, Michael, Kuehl, Niklas, Satzger, Gerhard
In AI-assisted decision-making, a central promise of putting a human in the loop is that they should be able to complement the AI system by adhering to its correct and overriding its mistaken recommendations. In practice, however, we often see that humans tend to over- or under-rely on AI recommendations, meaning that they either adhere to wrong or override correct recommendations. Such reliance behavior is detrimental to decision-making accuracy. In this work, we articulate and analyze the interdependence between reliance behavior and accuracy in AI-assisted decision-making, which has been largely neglected in prior work. We also propose a visual framework to make this interdependence more tangible. This framework helps us interpret and compare empirical findings, as well as obtain a nuanced understanding of the effects of interventions (e.g., explanations) in AI-assisted decision-making. Finally, we infer several interesting properties from the framework: (i) when humans under-rely on AI recommendations, there may be no possibility for them to complement the AI in terms of decision-making accuracy; (ii) when humans cannot discern correct from wrong AI recommendations, no such improvement can be expected either; (iii) interventions may lead to an increase in decision-making accuracy that is solely driven by an increase in humans' adherence to AI recommendations, without any ability to discern correct from wrong recommendations. Our work emphasizes the importance of measuring and reporting effects on both accuracy and reliance behavior when empirically assessing interventions.
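The interdependence between adherence rates and accuracy can be made concrete with a small back-of-the-envelope calculation. The following Python sketch is purely illustrative and not taken from the paper; it assumes, for simplicity, that overriding a wrong recommendation yields a correct decision and overriding a correct one yields a wrong decision, and the function name assisted_accuracy and the example numbers are hypothetical.

```python
# Illustrative sketch (not from the paper): expected accuracy of AI-assisted
# decisions as a function of reliance behavior, assuming an override flips
# the decision (overriding a wrong recommendation yields a correct decision,
# overriding a correct one yields a wrong decision).

def assisted_accuracy(ai_accuracy: float,
                      adherence_correct: float,
                      adherence_wrong: float) -> float:
    """ai_accuracy       -- fraction of AI recommendations that are correct
    adherence_correct -- probability of adhering to a correct recommendation
    adherence_wrong   -- probability of adhering to a wrong recommendation
    """
    correct_when_ai_right = ai_accuracy * adherence_correct
    correct_when_ai_wrong = (1 - ai_accuracy) * (1 - adherence_wrong)
    return correct_when_ai_right + correct_when_ai_wrong

# Under-reliance without discernment (properties i/ii): adhering to correct
# and wrong recommendations at the same rate cannot beat the AI alone.
print(assisted_accuracy(0.8, 0.5, 0.5))   # 0.5, below the AI's 0.8
# Higher adherence without discernment (property iii): accuracy rises only
# because humans follow the AI more, not because they spot its mistakes.
print(assisted_accuracy(0.8, 0.9, 0.9))   # ~0.74, still below 0.8
```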
On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making
Schoeffer, Jakob, De-Arteaga, Maria, Kuehl, Niklas
Proponents of explainable AI have often argued that it constitutes an essential path towards algorithmic fairness. Prior works examining these claims have primarily evaluated explanations based on their effects on humans' perceptions, but there is scant research on the relationship between explanations and distributive fairness of AI-assisted decisions. In this paper, we conduct an empirical study to examine the relationship between feature-based explanations and distributive fairness, mediated by human perceptions and reliance on AI recommendations. Our findings show that explanations influence fairness perceptions, which, in turn, relate to humans' tendency to adhere to AI recommendations. However, such explanations do not enable humans to discern correct from wrong AI recommendations. Instead, we show that they may affect reliance irrespective of the correctness of AI recommendations. Depending on which features an explanation highlights, this can foster or hinder distributive fairness: when explanations highlight features that are task-irrelevant and evidently associated with the sensitive attribute, this prompts overrides that counter stereotype-aligned AI recommendations. Meanwhile, if explanations appear task-relevant, this induces reliance behavior that reinforces stereotype-aligned errors. These results show that feature-based explanations are not a reliable mechanism to improve distributive fairness, as their ability to do so relies on a human-in-the-loop operationalization of the flawed notion of "fairness through unawareness". Finally, our study design provides a blueprint to evaluate the suitability of other explanations as pathways towards improved distributive fairness of AI-assisted decisions.
Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
Schoeffer, Jakob, Machowski, Yvette, Kuehl, Niklas
Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. These systems typically involve sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow for full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS in comparison to a scenario where a human instead of an ADS makes a high-stakes decision -- providing identical, thorough explanations of the decision in both cases. Surprisingly, we find that people perceive ADS as fairer than human decision-makers. Our analyses also suggest that people's AI literacy affects their perceptions, indicating that people with higher AI literacy favor ADS over human decision-makers more strongly, whereas people with low AI literacy exhibit no significant differences in their perceptions.
Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems
Schoeffer, Jakob, Kuehl, Niklas
It is often argued that one goal of explaining automated decision systems (ADS) is to facilitate positive perceptions (e.g., of fairness or trustworthiness) among users towards such systems. This viewpoint, however, makes the implicit assumption that a given ADS is fair and trustworthy to begin with. If the ADS issues unfair outcomes, then one might expect that explanations regarding the system's workings will reveal its shortcomings and, hence, lead to a decrease in fairness perceptions. Consequently, we suggest that it is more meaningful to evaluate explanations against their effectiveness in enabling people to appropriately assess the quality (e.g., fairness) of an associated ADS. We argue that for an explanation to be effective, perceptions of fairness should increase if and only if the underlying ADS is fair. In this in-progress work, we introduce the desideratum of appropriate fairness perceptions, propose a novel study design for evaluating it, and outline next steps towards a comprehensive experiment.
A Study on Fairness and Trust Perceptions in Automated Decision Making
Schoeffer, Jakob, Machowski, Yvette, Kuehl, Niklas
Automated decision systems are increasingly used for consequential decision-making -- for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no insight into how or why a given decision was reached. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their soundness is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different attempts at explaining such systems with respect to their effect on people's perceptions of fairness and trustworthiness towards the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will be verified, extended, and thoroughly discussed in the larger main study.
A Ranking Approach to Fair Classification
Schoeffer, Jakob, Kuehl, Niklas, Valera, Isabel
Algorithmic decision systems are increasingly used in areas such as hiring, school admission, or loan approval. Typically, these systems rely on labeled data for training a classification model. However, in many scenarios, ground-truth labels are unavailable, and instead we only have access to imperfect labels resulting from (potentially biased) human-made decisions. Despite being imperfect, historical decisions often contain some useful information on the unobserved true labels. In this paper, we focus on scenarios where only imperfect labels are available and propose a new fair ranking-based decision system as an alternative to traditional classification algorithms. Our approach is both intuitive and easy to implement, and thus particularly suitable for adoption in real-world settings. More specifically, we introduce a distance-based decision criterion which incorporates useful information from historical decisions and accounts for unwanted correlation between protected and legitimate features. Through extensive experiments on synthetic and real-world data, we show that our method is fair in that it (a) assigns the desirable outcome to the most qualified individuals, and (b) removes the effect of stereotypes in decision-making, thereby outperforming traditional classification algorithms. Additionally, we show theoretically that our method is consistent with a prominent concept of individual fairness, which states that "similar individuals should be treated similarly."
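To make the high-level idea more tangible, the following Python sketch shows one possible shape of a distance-based, ranking-style decision rule. It is a simplified illustration under assumptions of our own -- linear residualization to remove correlation between legitimate features and the protected attribute, and an "ideal" profile estimated from historically accepted individuals -- not the paper's exact criterion; the function fair_ranking and all parameter names are hypothetical.

```python
# Minimal, hypothetical sketch of a distance-based fair ranking rule
# (an illustration of the general idea, not the paper's method).

import numpy as np
from sklearn.linear_model import LinearRegression

def fair_ranking(X, s, y_hist, k):
    """Rank individuals by distance to an ideal profile after removing
    linear correlation between features X and protected attribute s."""
    # Residualize: keep only the part of X not linearly explained by s.
    reg = LinearRegression().fit(s.reshape(-1, 1), X)
    X_resid = X - reg.predict(s.reshape(-1, 1))
    # Hypothetical "ideal" profile: mean residualized features of individuals
    # who received the positive outcome in (possibly biased) historical data.
    ideal = X_resid[y_hist == 1].mean(axis=0)
    # Distance-based score: smaller distance to the ideal profile is better.
    distances = np.linalg.norm(X_resid - ideal, axis=1)
    ranking = np.argsort(distances)
    decisions = np.zeros(len(X), dtype=bool)
    decisions[ranking[:k]] = True   # positive outcome for the top-k
    return ranking, decisions

# Toy usage: two legitimate features, a binary protected attribute, and
# historical decisions that are correlated with the protected attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=100)
X = rng.normal(size=(100, 2)) + 0.5 * s[:, None]
y_hist = (X[:, 0] + 0.3 * s > 0.5).astype(int)
ranking, decisions = fair_ranking(X, s, y_hist, k=10)
```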