Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making

Timothée Schmude, Laura Koesten, Torsten Möller, Sebastian Tschiatschek

arXiv.org, Artificial Intelligence

Well-known examples of such "high-risk" [6] systems can be found in recidivism prediction [5], refugee resettlement [3], and public employment [19]. Many authors have outlined that faulty or biased predictions by ADM systems can have far-reaching consequences, including discrimination [5], inaccurate predictions [4], and overreliance on automated decisions [2]. High-level guidelines are therefore meant to prevent these issues by pointing out ways to develop trustworthy and ethical AI [10, 22]. However, practically applying these guidelines remains challenging, since the meaning and priority of ethical values shift depending on who is asked [11]. Recent work in Explainable Artificial Intelligence (XAI) thus suggests equipping individuals who are involved with an ADM system and carry responsibility--so-called "stakeholders"--with the means to assess the system themselves, i.e., enabling users, deployers, and affected individuals to independently check the system's ethical values [14]. Arguably, a pronounced understanding of the system is necessary for making such an assessment. While numerous XAI studies have examined how explaining an ADM system can increase stakeholders' understanding [20, 21], we highlight two aspects that remain an open challenge: i) the amount of resources needed to produce and test domain-specific explanations and ii) the difficulty of creating and evaluating understanding for a large variety of people. Further, it is important to note that, despite our reference to "Explainable AI," ADM is not constrained to AI and may indeed encompass a broader problem space. Despite the emphasis on "understanding" in XAI research, the field features only a few studies that introduce learning frameworks from other disciplines.
