Implications of Model Indeterminacy for Explanations of Automated Decisions
There has been a significant research effort focused on explaining predictive models, for example through post-hoc explainability and recourse methods. Most of the proposed techniques operate on a single, fixed predictive model. However, it is well-known that given a dataset and a predictive task, there may be a multiplicity of models that solve the problem (nearly) equally well. In this work, we investigate the implications of this kind of model indeterminacy on the post-hoc explanations of predictive models. We show how it can lead to explanatory multiplicity, and we explore the underlying drivers.
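The phenomenon the abstract describes can be made concrete with a toy example (entirely hypothetical, not taken from the paper): two classifiers fit perfectly correlated features, achieve identical accuracy, yet a simple flip-based post-hoc explanation attributes their decisions to different features, so the recourse advice they imply diverges.

```python
# Sketch of model indeterminacy: features x1 and x2 are perfectly
# correlated, and each model relies on a different one of them.
# (Illustrative data and models, not from the paper.)
data = [((0, 0), 0), ((1, 1), 1), ((0, 0), 0), ((1, 1), 1)]

model_a = lambda x: x[0]   # decides using feature 0 only
model_b = lambda x: x[1]   # decides using feature 1 only

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

def sensitive_features(model, x):
    """A minimal post-hoc 'explanation': which features, when flipped,
    change the model's prediction on input x?"""
    flips = []
    for i in range(len(x)):
        x_flipped = list(x)
        x_flipped[i] = 1 - x_flipped[i]
        if model(tuple(x_flipped)) != model(x):
            flips.append(i)
    return flips
```

Both models score 1.0 accuracy, but for the input (0, 0) the explanation of `model_a` names feature 0 while that of `model_b` names feature 1: the same applicant would receive contradictory advice depending on which (equally good) model was deployed.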
Reviews: Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
This paper proposes a new measure of fairness for classification and regression problems based on welfare considerations rather than inequality considerations. This measure of fairness represents a convex constraint, making it easy to optimize for. The authors experimentally demonstrate the tradeoffs between this notion of fairness and previous notions. I believe this to be a pretty valuable submission. A welfare-based approach, rather than an inequality-based one, should turn out to be very helpful in addressing all sorts of concerns with the current literature. It also raises a number of follow-up questions; while it is disappointing that these are not addressed here, their presence means the community should take interest in this paper.
A Study on Fairness and Trust Perceptions in Automated Decision Making
Schoeffer, Jakob, Machowski, Yvette, Kuehl, Niklas
Automated decision systems are increasingly used for consequential decision making -- for a variety of reasons. These systems often rely on sophisticated yet opaque models, which allow little or no insight into how or why a given decision was arrived at. This is not only problematic from a legal perspective; non-transparent systems are also prone to yield undesirable (e.g., unfair) outcomes because their soundness is difficult to assess and calibrate in the first place. In this work, we conduct a study to evaluate different approaches to explaining such systems, with respect to their effects on people's perceptions of fairness and trust towards the underlying mechanisms. A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended and thoroughly discussed in the larger main study.
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.04)
- Law (1.00)
- Banking & Finance > Loans (0.48)
- Information Technology > Security & Privacy (0.46)
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
Heidari, Hoda, Ferrari, Claudio, Gummadi, Krishna, Krause, Andreas
We draw attention to an important, yet largely overlooked aspect of evaluating fairness for automated decision making systems---namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower-bound on our measures often leads to bounded inequality in algorithmic outcomes; hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
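The appeal to cardinal social welfare can be sketched with one classical family: average concave (risk-averse) utility of individual benefits. The CRRA utility below, the curvature parameter, and the benefit vectors are illustrative assumptions, not the paper's exact formulation; the sketch only shows why concavity makes a welfare floor act as an inequality bound.

```python
# One classical cardinal welfare family (illustrative, not necessarily
# the exact form used in the paper): average CRRA utility of benefits.
def utility(b, eta=0.5):
    # Concave utility; eta in (0, 1) controls risk aversion / curvature.
    return b ** (1 - eta) / (1 - eta)

def welfare(benefits, eta=0.5):
    return sum(utility(b, eta) for b in benefits) / len(benefits)

equal   = [4.0, 4.0]   # same total benefit, evenly spread
unequal = [1.0, 7.0]   # same total benefit, concentrated
```

Because utility is concave, the even allocation scores strictly higher welfare than the concentrated one with the same total benefit, so requiring `welfare(benefits) >= floor` as a constraint in a loss-minimization pipeline penalizes unequal outcome distributions.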
Automated Decision Making and the GDPR - Aphaia: Leading experts in ICT regulation and policy
Artificial Intelligence is increasingly becoming ingrained in all facets of our societies and lives. While it certainly heralds an age of cool futuristic technology and applications (facial recognition and self-driving cars, for example!), what about when AI is utilized as an automated decision making tool? Can this pose an issue to an individual's rights? What are the possible implications? Are there any legal provisions to ensure fairness?
- Law (0.82)
- Information Technology > Security & Privacy (0.62)
Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods
Automated decisions are increasingly part of everyday life, but how can the public scrutinize, understand, and govern them? To begin to explore this, Omidyar Network has, in partnership with Upturn, published Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods. The report is based on an extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. It maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it.
Evidence around inequality for APPG AI – Doteveryone – Medium
The following is my presentation, representing Doteveryone, for today's APPG AI evidence session. Artificial intelligence does not in and of itself reduce or create inequality. AI is a tool, and its outcomes are determined by the way we humans use it. Currently, the biggest users and developers of AI are the organisations with access to the most expertise, data and computer hardware. These are largely private sector companies working to solve private sector problems, creating wealth for a few.
Automated decision making shows worrying signs of limitation
Data released by West Midlands Fire Service appears to show the city of Birmingham has too many fire stations, with 15 compared with neighbouring Solihull's two. The service's online map of attendance times shows many parts of Solihull, a suburban and rural area, have to wait much longer for firefighters to arrive. Even on the basis of relative population sizes, Solihull looks under-served.
- Europe > United Kingdom > England > West Midlands (0.25)
- North America > United States > Wisconsin (0.05)
- North America > Canada (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
What Can I Do Now? Guiding Users in a World of Automated Decisions
More and more processes governing our lives use in some part an automatic decision step, where, based on a feature vector derived from an applicant, an algorithm has the decision power over the final outcome. Here we present a simple idea which gives some of the power back to the applicant by providing her with alternatives which would make the decision algorithm decide differently. It is based on a formalization reminiscent of methods used for evasion attacks, and consists in enumerating the subspaces where the classifier decides the desired output. This has been implemented for the specific case of decision forests (ensemble methods based on decision trees), mapping the problem to an iterative version of enumerating k-cliques. We live in a world where more and more of the decisions affecting our lives are taken by automatic systems.
- North America > United States > New York (0.04)
- Europe > France (0.04)
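The clique-based subspace enumeration described in the last abstract is beyond a short sketch, but the underlying idea, offering the applicant an alternative input on which the forest decides differently, can be illustrated with a brute-force search over a toy decision forest. All thresholds, feature names, and the candidate grid below are hypothetical, and the search is exhaustive rather than the paper's clique-based method.

```python
from itertools import product

# Toy decision forest over (income, debt): three stumps, majority vote.
# Thresholds and grid values are made up for this sketch.
def stump_income_50(x): return 1 if x[0] >= 50 else 0
def stump_debt_20(x):   return 1 if x[1] <= 20 else 0
def stump_income_40(x): return 1 if x[0] >= 40 else 0
forest = [stump_income_50, stump_debt_20, stump_income_40]

def predict(x):
    return 1 if sum(tree(x) for tree in forest) >= 2 else 0

def recourse(x, desired=1):
    """Brute-force stand-in for subspace enumeration: scan a small grid
    for the desired-class point that changes the fewest features,
    breaking ties by total absolute change."""
    best = None
    for cand in product(range(30, 71, 5), range(0, 41, 5)):
        if predict(cand) != desired:
            continue
        changed = sum(a != b for a, b in zip(x, cand))
        dist = sum(abs(a - b) for a, b in zip(x, cand))
        if best is None or (changed, dist) < best[0]:
            best = ((changed, dist), cand)
    return best[1] if best else None

applicant = (45, 30)   # denied: only one of the three stumps votes yes
```

For this applicant, the search returns (50, 30): raise income to 50 while leaving debt unchanged, which flips two of the three stumps and hence the majority vote. That is exactly the "what can I do now?" guidance the paper aims to provide, here recovered by exhaustive search instead of clique enumeration.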