ACROCPoLis: A Descriptive Framework for Making Sense of Fairness
Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren Lindström, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Timotheus Kampik, Tom Lenaerts, Julian Alfredo Mendez, Juan Carlos Nieves
–arXiv.org Artificial Intelligence
Fairness is central to the ethical and responsible development and use of AI systems, and a large number of frameworks and formal notions of algorithmic fairness are available. However, many of the proposed fairness solutions revolve around technical considerations rather than the needs of, and consequences for, the most impacted communities. We therefore shift the focus away from definitions and toward the inclusion of societal and relational aspects, to represent how the effects of AI systems impact and are experienced by individuals and social groups. In this paper, we do this by proposing the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects. The framework provides a shared vocabulary in which the factors relevant to fairness assessments in different situations and procedures, as well as their interrelationships, are made explicit. This enables us to compare analogous situations, to highlight the differences in dissimilar situations, and to capture differing interpretations of the same situation by different stakeholders.

CCS Concepts: Computer systems organization → Embedded systems; Redundancy; Robotics; Networks → Network reliability.

INTRODUCTION

Fairness is a fundamental aspect of justice, and central to a democratic society [50]. It is therefore unsurprising that justice and fairness are at the core of current discussions about the ethics of the development and use of AI systems. Given that people often associate fairness with consistency and accuracy, the idea that our decisions, as well as the decisions affecting us, can become fairer by replacing human judgment with automated, numerical systems is appealing [1, 16, 24]. Nevertheless, current research and journalistic investigations have identified issues with discrimination, bias, and lack of fairness in a variety of AI applications [41].

All authors contributed equally to this research. Authors listed alphabetically. Authors' addresses: Andrea Aler Tubella, andrea.aler@umu.se.
Apr-19-2023
- Country:
- Europe (0.71)
- North America > United States
- California (0.28)
- New York (0.28)
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Applied AI (0.68)
- Issues > Social & Ethical Issues (1.00)
- Machine Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning (0.94)