ADM systems
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
State, Laura; Bringas Colmenarejo, Alejandra; Beretta, Andrea; Ruggieri, Salvatore; Turini, Franco; Law, Stephanie
Explainable AI (XAI) provides methods to understand non-interpretable machine learning models. However, we know little about what legal experts expect from these explanations, including whether they comply with, and hold value under, European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI, with a specific focus on the European General Data Protection Regulation. The study consists of an online questionnaire and follow-up interviews, and is centered on a use case in the credit domain. Using grounded theory, we extract a set of hierarchical, interconnected codes and present the standpoints of the participating experts towards XAI. We find that the presented explanations are hard to understand and lack information, and we discuss issues that can arise from the diverging interests of the data controller and the data subject. Finally, we present a set of recommendations for developers of XAI methods, along with indications of legal areas of discussion. Among other things, the recommendations address the presentation, choice, and content of an explanation, technical risks, and the end user, while the legal pointers concern the contestability of explanations, transparency thresholds, intellectual property rights, and the relationship between the involved parties.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.48)
Hacking a surrogate model approach to XAI
Wilhelm, Alexander; Zweig, Katharina A.
In recent years, the number of new applications for highly complex AI systems has risen significantly. Algorithmic decision-making systems (ADMs) are one such application, in which an AI system replaces the decision-making process of a human expert. Explainable AI (XAI) has become more important as one approach to ensuring the fairness and transparency of such systems. One way to achieve explainability is surrogate models, i.e., training a new, simpler machine learning model on the input-output relationship of a black-box model. The simpler model could, for example, be a decision tree, which is thought to be intuitively understandable by humans. However, there is little insight into how well the surrogate model approximates the black box. Our main assumption is that a good surrogate model approach should bring discriminating behavior of the black box to the attention of humans; prior to our research, we assumed that a surrogate decision tree would identify such a pattern on one of its first levels. However, in this article we show that even if the discriminated subgroup - while otherwise being identical in all categories - does not get a single positive decision from the black-box ADM system, the corresponding question of group membership can be pushed down to as deep a level of the tree as the operator of the system wants. We then generalize this finding to pinpoint the exact level of the tree on which the discriminating question is asked, and we show that in a more realistic scenario, where discrimination affects only some fraction of the disadvantaged group, it is even easier to hide such discrimination. Our approach generalizes easily to other surrogate models.
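To make the surrogate idea concrete, here is a minimal sketch of fitting a shallow decision-tree surrogate to a black-box classifier with scikit-learn. This is not the authors' experimental setup; the dataset, black-box model, and tree depth are illustrative assumptions.

```python
# Minimal sketch of a surrogate-model explanation: fit a shallow decision
# tree to mimic a black-box classifier's input-output behaviour.
# Dataset and model choices are illustrative, not the article's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box" whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))
```

Per the article's finding, high fidelity is no guarantee of honesty: a split on group membership need not surface near the root of such a tree, and its depth can be manipulated by the system's operator.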
- North America > United States > New York (0.04)
- Europe > Germany > Rhineland-Palatinate > Kaiserslautern (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
Everything, Everywhere All in One Evaluation: Using Multiverse Analysis to Evaluate the Influence of Model Design Decisions on Algorithmic Fairness
Simson, Jan; Pfisterer, Florian; Kern, Christoph
A vast number of systems across the world use algorithmic decision making (ADM) to (partially) automate decisions that were previously made by humans. When designed well, these systems promise more objective decisions while saving large amounts of resources and freeing up human time. However, when ADM systems are not designed well, they can lead to unfair decisions that discriminate against societal groups. The downstream effects of ADMs critically depend on the decisions made during the systems' design and implementation, as biases in data can be mitigated or reinforced along the modeling pipeline. Many of these design decisions are made implicitly, without knowing exactly how they will influence the final system. It is therefore important to make explicit the decisions made during the design of ADM systems and to understand how these decisions affect the fairness of the resulting system. To study this issue, we draw on insights from the field of psychology and introduce the method of multiverse analysis for algorithmic fairness. In our proposed method, we turn implicit design decisions into explicit ones and demonstrate their fairness implications. By combining decisions, we create a grid of all possible "universes" of decision combinations. For each of these universes, we compute metrics of fairness and performance. Using the resulting dataset, one can see which decisions impact fairness, and how. We demonstrate how multiverse analyses can be used to better understand the variability and robustness of algorithmic fairness, using an exemplary case study of predicting public health coverage of vulnerable populations for potential interventions. Our results illustrate how decisions made during the design of a machine learning system can have surprising effects on its fairness, and how to detect these effects using multiverse analysis.
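A minimal sketch of the multiverse idea follows, with hypothetical design decisions and a stubbed evaluation function; the decision names, options, and metrics are illustrative assumptions, not the authors' case study.

```python
# Minimal sketch of a multiverse analysis: enumerate every combination
# ("universe") of modeling decisions and record metrics for each.
# Decision names, options, and metrics are illustrative assumptions.
from itertools import product

design_decisions = {
    "imputation": ["mean", "median", "drop_missing"],
    "model":      ["logistic_regression", "random_forest"],
    "threshold":  [0.4, 0.5, 0.6],
}

def evaluate_universe(imputation, model, threshold):
    """Train and evaluate one pipeline variant.

    Stub: plug in a real pipeline here and return, e.g., accuracy and a
    demographic-parity gap for this combination of decisions.
    """
    ...

results = []
for universe in product(*design_decisions.values()):
    settings = dict(zip(design_decisions.keys(), universe))
    results.append({**settings, "metrics": evaluate_universe(**settings)})

# `results` now covers all 3 * 2 * 3 = 18 universes; inspecting it shows
# which decisions drive variation in the fairness metric.
```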
- Europe > Austria > Vienna (0.14)
- North America > United States > Alaska (0.04)
- North America > United States > New York (0.04)
The flawed algorithm at the heart of Robodebt
Australia's Royal Commission into the Robodebt Scheme has published its findings. Various unnamed individuals are referred for potential civil or criminal investigation, and its publication is a timely reminder of the potential dangers presented by automated decision-making systems, and of how the best way to mitigate their risks is to instill a strong culture of ethics and systems for accountability in our institutions. The so-called Robodebt scheme was touted as a way to save billions of dollars by using automation and algorithms to identify welfare fraud and overpayments. In the end, it serves as a salient lesson in the dangers of replacing human oversight and judgement with automated decision-making. It reminds us that the basic method was not merely flawed but illegal; that the scheme was premised on falsely treating welfare recipients as cheats (rather than as some of society's most vulnerable); and that it lacked both transparency and oversight.
- Government (1.00)
- Law > Criminal Law (0.35)
Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making
Schmude, Timothée; Koesten, Laura; Möller, Torsten; Tschiatschek, Sebastian
Well-known examples of "high-risk" [6] ADM systems can be found in recidivism prediction [5], refugee resettlement [3], and public employment [19]. Many authors have outlined that faulty or biased predictions by ADM systems can have far-reaching consequences, including discrimination [5], inaccurate predictions [4], and overreliance on automated decisions [2]. High-level guidelines are therefore meant to prevent these issues by pointing out ways to develop trustworthy and ethical AI [10, 22]. However, practically applying these guidelines remains challenging, since the meaning and priority of ethical values shift depending on who is asked [11]. Recent work in Explainable Artificial Intelligence (XAI) thus suggests equipping the individuals who are involved with an ADM system and carry responsibility (so-called "stakeholders") with the means to assess the system themselves, i.e., enabling users, deployers, and affected individuals to independently check the system's ethical values [14]. Arguably, a pronounced understanding of the system is necessary for making such an assessment. While numerous XAI studies have examined how explaining an ADM system can increase stakeholders' understanding [20, 21], we highlight two aspects that remain an open challenge: i) the amount of resources needed to produce and test domain-specific explanations, and ii) the difficulty of creating and evaluating understanding for a large variety of people. Further, it is important to note that, despite our reference to "Explainable AI," ADM is not constrained to AI and may indeed encompass a broader problem space. Despite the emphasis on "understanding" in XAI research, the field features only a few studies that introduce learning frameworks from other disciplines.
- Government (0.92)
- Education (0.71)
Uncertainty-aware predictive modeling for fair data-driven decisions
Kaiser, Patrick; Kern, Christoph; Rügamer, David
Both industry and academia have made considerable progress in developing trustworthy and responsible machine learning (ML) systems. While critical concepts like fairness and explainability are often addressed, the safety of systems is typically not sufficiently taken into account. By viewing data-driven decision systems as socio-technical systems, we draw on the literature on uncertainty in ML to show how fairML systems can also be safeML systems. We posit that a fair model needs to be an uncertainty-aware model, e.g. by drawing on distributional regression. For fair decisions, we argue that a safe fail option should be used for individuals with uncertain categorization. We introduce semi-structured deep distributional regression as a modeling framework that addresses multiple concerns brought against standard ML models, and we show its use in a real-world example of algorithmic profiling of job seekers.
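As an illustration of the safe fail idea (a reject option that routes uncertain cases to a human), here is a minimal sketch. The model, data, and threshold are assumptions for illustration; a plain logistic regression stands in for the paper's semi-structured deep distributional regression, whose uncertainty estimates would be richer than raw classifier scores.

```python
# Minimal sketch of a "safe fail" (reject) option: automate only decisions
# the model is confident about; route uncertain cases to human review.
# Model, data, and confidence threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=6, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

proba = clf.predict_proba(X)[:, 1]
CONFIDENCE = 0.8  # automate only when the predicted probability is this one-sided

decide_auto = (proba >= CONFIDENCE) | (proba <= 1 - CONFIDENCE)
decisions = np.where(proba >= 0.5, "accept", "reject").astype(object)
decisions[~decide_auto] = "human_review"  # the safe fail path

print(f"automated: {decide_auto.mean():.1%}, deferred: {(~decide_auto).mean():.1%}")
```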
- North America > United States > California (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Law (1.00)
- Banking & Finance > Economy (0.32)
Pandemic exploited to normalise mass surveillance, watchdog warns
The COVID-19 pandemic was exploited as an excuse to further normalise surveillance and to monitor an increasing number of people's daily activities around the world under the guise of public health, a tech watchdog warned on Thursday. Since the onset of the coronavirus crisis, a slew of automated decision-making (ADM) systems has been adopted in haste, with almost no transparency, no adequate safeguards, and insufficient democratic debate, according to AlgorithmWatch, a non-profit that tracks ADM systems and their impacts on society. In a new report, its "Tracing the Tracers" project presents the findings of a yearlong monitoring of the implementation of ADM systems – including systems based on artificial intelligence (AI) – in Europe and beyond. The group, based in Berlin, Germany, warns that the situation regarding ADM systems is even worse than before the COVID-19 pandemic began, because such systems now include potentially life-saving tools. Examples of these ADM systems include digital contact tracing (DCT) apps and digital COVID certificates (DCC).
- Europe > Germany > Berlin (0.26)
- North America > United States (0.06)
- Asia > China (0.06)
- Health & Medicine > Epidemiology (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.99)
- Health & Medicine > Therapeutic Area > Immunology (0.99)
Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature
Starke, Christopher; Baleis, Janine; Keller, Birte; Marcinkowski, Frank
Algorithmic decision-making (ADM) increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach, demanded by scholars and policymakers, requires taking people's fairness perceptions into account when designing and implementing ADM. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 39 empirical studies spanning multiple domains and scientific disciplines. Through careful coding, we systematize the current empirical literature along four dimensions: (a) algorithmic predictors, (b) human predictors, (c) comparative effects (human decision-making vs. algorithmic decision-making), and (d) consequences of ADM. While we identify much heterogeneity around the theoretical concepts and empirical measurements of algorithmic fairness, the insights come almost exclusively from Western-democratic contexts. By advocating for more interdisciplinary research adopting a society-in-the-loop framework, we hope our work will contribute to fairer and more responsible ADM.
- Europe > Germany > North Rhine-Westphalia > Düsseldorf Region > Düsseldorf (0.14)
- Europe > Netherlands (0.04)
- Asia > South Korea (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
AI and automation vs. the COVID-19 pandemic: Trading liberty for safety
Digital technologies have been touted as a solution to the COVID-19 outbreak since early in the pandemic. AlgorithmWatch, a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes, has just published a report on Automated Decision-Making Systems in the COVID-19 Pandemic, examining the use of technology to respond to COVID-19. From cancelled conferences to disrupted supply chains, not a corner of the global economy is immune to the spread of COVID-19. The report has a European lens, as AlgorithmWatch focuses on the use of digital technology in the EU. Its findings, however, are interesting and applicable regardless of geography, as they refer to the same underlying principles and technologies.
- Europe > Germany (0.05)
- Europe > Estonia (0.05)
- North America > United States > Michigan (0.05)
Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges
Machine learning algorithms are now frequently used in sensitive contexts that substantially affect the course of human lives, such as credit lending or criminal justice. This is driven by the idea that 'objective' machines base their decisions solely on facts and remain unaffected by human cognitive biases, discriminatory tendencies, or emotions. Yet, there is overwhelming evidence that algorithms can inherit or even perpetuate human biases in their decision making when they are trained on data that contains biased human decisions. This has led to a call for fairness-aware machine learning. However, fairness is a complex concept, which is also reflected in the attempts to formalize fairness for algorithmic decision making. Statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts. Moreover, inherent trade-offs between these criteria make it impossible to unify them in one general framework. Thus, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. In the future, research on algorithmic decision-making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits.
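To make the point about competing statistical criteria concrete, here is a minimal sketch computing two common (and, by well-known impossibility results, generally incompatible) fairness criteria, demographic parity and equalized odds, on synthetic decisions; the data and decision rule are illustrative assumptions.

```python
# Minimal sketch: two common statistical fairness criteria computed on
# synthetic decisions for two groups. Data and decision rule are
# illustrative; such criteria generally cannot all hold simultaneously.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)    # protected attribute A
y_true = rng.integers(0, 2, size=n)   # true outcome
# A decision rule that is slightly harsher on group 1:
y_pred = ((y_true == 1) & (rng.random(n) > 0.05 + 0.10 * group)).astype(int)

def positive_rate(mask):
    return y_pred[mask].mean()

# Demographic parity compares P(Y_hat = 1 | A = g) across groups.
dp_gap = abs(positive_rate(group == 0) - positive_rate(group == 1))

# Equalized odds compares true- and false-positive rates across groups.
tpr = [positive_rate((group == g) & (y_true == 1)) for g in (0, 1)]
fpr = [positive_rate((group == g) & (y_true == 0)) for g in (0, 1)]
eo_gap = max(abs(tpr[0] - tpr[1]), abs(fpr[0] - fpr[1]))

print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equalized odds gap:     {eo_gap:.3f}")
```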
- Europe > Spain > Andalusia > Seville Province > Seville (0.04)
- North America > United States > North Carolina (0.04)
- North America > United States > New York (0.04)
- Information Technology > Security & Privacy (0.93)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.93)
- Law > Criminal Law (0.89)
- Government > Regional Government > Europe Government (0.68)