Towards Contrastive Explanations for Comparing the Ethics of Plans Artificial Intelligence

We are interested in models where actions are deterministic, durationless, and can be performed one at a time. We also assume a known initial state and goal. Traditionally, ethical principles of single decisions are evaluated [1]. In the context of AI Planning this means analysing a massive number of isolated decisions that may not make sense without the context in which they are being made. Therefore, it is preferable to evaluate the ethical contents of a plan as a whole. Lindner et al. [2] describe an approach to judging …

This can be done through contrastive explanations [5], which focus on explaining the difference between a factual event A and a contrasting event B. To produce these explanations, one must reason about the hypothetical alternative B, which likely means constructing an alternative plan where B is included rather than A. The original model is constrained to produce a hypothetical planning model (HModel). The solution to the HModel is the hypothetical plan (HPlan) that contains the contrast case expected by the user.
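The HModel/HPlan idea can be illustrated with a minimal sketch: constrain the original planning model so that the factual action A is excluded and the contrast action B must appear, then re-plan. The toy STRIPS-style domain, the `plan` function, and all names below are our own illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def plan(actions, init, goal, forbid=None, require=None, max_len=8):
    """Breadth-first search for a shortest plan over a toy STRIPS-like model.
    `forbid`/`require` stand in for the HModel's contrastive constraint:
    the named action must be excluded from / included in the plan."""
    start = (frozenset(init), require is None)  # (state, required-action used?)
    queue = deque([(start, ())])
    seen = {start}
    while queue:
        (state, req_done), steps = queue.popleft()
        if goal <= state and req_done:
            return list(steps)
        if len(steps) >= max_len:
            continue
        for name, (pre, add, delete) in actions.items():
            if name == forbid or not pre <= state:
                continue
            nxt = ((state - delete) | add, req_done or name == require)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + (name,)))
    return None  # no plan satisfies the constraints

# Hypothetical domain: each action is (preconditions, add effects, delete effects).
actions = {
    "drive": (frozenset({"at_home", "has_car"}), frozenset({"at_work"}), frozenset({"at_home"})),
    "walk":  (frozenset({"at_home"}), frozenset({"at_work", "tired"}), frozenset({"at_home"})),
}
init, goal = {"at_home", "has_car"}, frozenset({"at_work"})
factual = plan(actions, init, goal)                                  # the original plan
hplan   = plan(actions, init, goal, forbid="drive", require="walk")  # "why drive rather than walk?"
```

Comparing `factual` and `hplan` (here, their lengths and side effects such as `tired`) is what lets the explanation contrast event A with the user's expected event B.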

Understanding artificial intelligence ethics and safety Artificial Intelligence

A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. This guide, written for department and delivery leads in the UK public sector and adopted by the British Government in its publication, 'Using AI in the Public Sector,' identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. It stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. It also highlights the need for algorithmically supported outcomes to be interpretable by their users and made understandable to decision subjects in clear, non-technical, and accessible ways. Finally, it builds out a vision of human-centred and context-sensitive implementation that gives a central role to communication, evidence-based reasoning, situational awareness, and moral justifiability.

AI Policy Matters – AI data, facial recognition, and more

AI Policy Matters is a regular column in the ACM SIGAI AI Matters newsletter featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog. Confusion in the popular media about terms such as algorithm, and about what constitutes AI technology, causes critical misunderstandings among the public and policymakers. More importantly, the role of data is often ignored in ethical and operational considerations. Even if AI systems are perfectly built, low-quality and biased data cause unintentional and even intentional hazards. A generative pre-trained transformer, GPT-3, is currently in the news.

When AI, big data, ethics and human rights converge News CORDIS European Commission

"Artificial intelligence and big data analytics bring a variety of benefits to society, but at the same time have the potential to disrupt society, ethical values and human rights, and life as we know it," says Bernd Stahl, Director of the Centre for Computing and Social Responsibility, De Montfort University, and coordinator of the SHERPA project. "The EU-funded SHERPA project examines these issues and is working to enhance the responsible development of such technologies." On 2-3 May 2018, representatives of 11 organisations (from academia, industry, civil society, standards bodies and ethics committees) from 6 European countries met in Brussels to launch the EU-funded SHERPA project, which will examine how smart information systems (SIS), i.e. the combination of artificial intelligence (AI) and big data analytics, impact ethics and human rights. In dialogue with stakeholders, the project will develop novel ways to understand and address ethical and human rights challenges, seeking desirable and sustainable solutions that can benefit both innovators and society. Researchers and innovators want to experiment with AI and big data analytics and devise new solutions that avoid ethical and regulatory barriers.

A Formalization of Kant's Second Formulation of the Categorical Imperative Artificial Intelligence

We present a formalization and computational implementation of the second formulation of Kant's categorical imperative. This ethical principle requires an agent to never treat someone merely as a means but always also as an end. Here we interpret this principle in terms of how persons are causally affected by actions. We introduce Kantian causal agency models in which moral patients, actions, goals, and causal influence are represented, and we show how to formalize several readings of Kant's categorical imperative that correspond to Kant's concept of strict and wide duties towards oneself and others. Stricter versions handle cases where an action directly causally affects oneself or others, whereas the wide version maximizes the number of persons being treated as an end. We discuss limitations of our formalization by pointing to one of Kant's cases that the machinery cannot handle in a satisfying way.
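The strict and wide readings described above can be sketched in a heavily simplified boolean abstraction. This is our own illustration, not the paper's Kantian causal agency machinery: we assume per-patient flags for "causally used toward the goal" and "own end also promoted", and the function names and examples are hypothetical.

```python
def strict_reading(patients, used_as_means, treated_as_end):
    """Strict duty (simplified): an action is permissible only if no patient
    is causally used toward the goal without also being treated as an end."""
    return all(treated_as_end[p] for p in patients if used_as_means[p])

def wide_reading(candidate_plans):
    """Wide duty (simplified): among candidate plans, prefer the one that
    treats the most patients as ends.
    candidate_plans: list of (plan_name, set of patients treated as ends)."""
    return max(candidate_plans, key=lambda c: len(c[1]))[0]

# Pushing a bystander in front of a trolley uses them merely as a means:
trolley_ok = strict_reading(["bystander"], {"bystander": True}, {"bystander": False})
# Hiring a fairly paid worker uses them as a means but also treats them as an end:
hire_ok = strict_reading(["worker"], {"worker": True}, {"worker": True})
# The wide reading picks the plan benefiting more patients:
best = wide_reading([("plan_a", {"p1"}), ("plan_b", {"p1", "p2"})])
```

The boolean flags elide the causal reasoning that the paper formalizes (how actions causally affect patients and goals), which is also where its unsatisfying Kantian edge case arises.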