Collaborating Authors

Participatory Approaches to Algorithmic Responsibility


A framework for responsible algorithmic systems would be incomplete (and not very responsible) without provision for citizen participation. The premise is simple: in deliberative democracies, citizens should have agency over how their data are used and a say in the policies that affect their well-being. This policy influence should also extend to algorithmic decision-making systems (ADS) deployed by private entities. High-risk ADS are systems that directly or indirectly affect the benefits, punishments, or opportunities that individuals or groups can receive; the risk is that individuals or groups are harmed by the system's output, resulting in incorrect or unfair outcomes.

Relational Artificial Intelligence

The impact of Artificial Intelligence depends not only on fundamental research and technological developments, but to a large extent on how these systems are introduced into society and used in everyday situations. Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective. A rational approach to AI, in which computational algorithms drive decision making independent of human intervention, insight, and emotion, has been shown to result in bias and exclusion, laying bare societal vulnerabilities and insecurities. A relational approach, one that focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI. A relational approach to AI recognises that objective and rational reasoning does not always lead to the 'right' way to proceed, because what is 'right' depends on the dynamics of the situation in which the decision is taken, and that rather than solving ethical problems, the focus of the design and use of AI must be on asking the ethical question. In this position paper, I start with a general discussion of current conceptualisations of AI, followed by an overview of existing approaches to the governance and responsible development and use of AI. Then, I reflect on what the bases of a social paradigm for AI should be and how this should be embedded in relational, feminist, and non-Western philosophies, in particular the Ubuntu philosophy.

The Big Data Challenge - Shaping AI: Recommendations By Virginia Dignum - Big Data Value


Virginia Dignum is a professor at the Department of Computing Science at Umeå University in Sweden. She is also a member of, among others, the European Commission High-Level Expert Group on AI, the World Economic Forum Council on AI, the IEEE Global Initiative on Ethically Aligned Design of Autonomous and Intelligent Systems, and the European Global Forum on AI (AI4People). She has written a series of blogs for the website, on which the content of this blogpost is based. You can find the hyperlinks to her blogs at the bottom of this page. Facing the challenge of bringing together many views from different disciplines on what AI exactly entails, Virginia's definition of AI offers an overarching perspective: "(…) AI is the discipline of developing computer systems that are able of perceiving their environment, with the ability to deliberate how to best act in order to achieve its own goals, while taking into account that the environment contains other actors similar to itself."

Teaching AI, Ethics, Law and Policy

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address professional responsibility as well as ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers, and suggests a curriculum for a course on ethics, law, and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.

Andrew Pery, Ethics Evangelist, ABBYY – Interview Series


Andrew Pery is the Ethics Evangelist at ABBYY, a digital intelligence company. ABBYY empowers organizations to access the valuable, yet often hard-to-attain, insight into their operations that enables true business transformation. The company recently released a Global Initiative Promoting the Development of Trustworthy Artificial Intelligence. We asked Andrew questions about ethics in AI, abuses of AI, and what the AI industry can do about these concerns moving forward. What initially instigated your interest in AI ethics? What initially sparked my interest in AI ethics was a deep interest in the intersection of law and AI technology.