John Hooker
Carnegie Mellon University
Revised December 2018

Abstract. While many see the prospect of autonomous machines as threatening, autonomy may be exactly what we want in a superintelligent machine. There is a sense of autonomy, deeply rooted in the ethical literature, in which an autonomous machine is necessarily an ethical one. Development of the theory underlying this idea not only reveals the advantages of autonomy, but it sheds light on a number of issues in the ethics of artificial intelligence. It helps us to understand what sort of obligations we owe to machines, and what obligations they owe to us. It clears up the issue of assigning responsibility to machines or their creators. More generally, a concept of autonomy that is adequate to both human and artificial intelligence can lead to a more adequate ethical theory for both.

There is a good deal of trepidation at the prospect of autonomous machines. They may wreak havoc and even turn on their creators. We fear losing control of machines that have minds of their own, particularly if they are intelligent enough to outwit us. There is talk of a "singularity" in technological development, at which point machines will begin designing themselves and create superintelligence (Vinge 1993; Bostrom 2014). Do we want such machines to be autonomous? There is a sense of autonomy, deeply rooted in the ethics literature, in which this may be exactly what we want. The attraction of an autonomous machine, in this sense, is that it is an ethical machine. The aim of this paper is to explain why this is so, and to show that the associated theory can shed light on a number of issues in the ethics of artificial intelligence (AI).
How obliged can we be to AI, and how much danger does it pose to us? A surprising proportion of our society holds exaggerated fears of, or hopes for, AI, such as the fear of robot world conquest or the hope that AI will indefinitely perpetuate our culture. These misapprehensions are symptomatic of a larger problem: a confusion about the nature and origins of ethics and its role in society. While AI technologies do present both promises and threats, these are not qualitatively different from those posed by other artifacts of our culture that are largely ignored, from factories to advertising, weapons to political systems. Ethical systems are based on notions of identity, and the exaggerated hopes and fears surrounding AI derive from our cultures having not yet accommodated the fact that language and reasoning are no longer uniquely human. The experience of AI may improve our ethical intuitions and self-understanding, potentially helping our societies make better-informed decisions on serious ethical dilemmas.
The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: that their development is inevitable; the prevention of harm; the need for public trust; the prevention of immoral use; that such machines are better moral reasoners than humans; and that building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses these reasons in turn. In so doing, the paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that lie precisely at the intersection of ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches to the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Next we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view to the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising from deliberately unethical design, implementation, and use of autonomous robots.
This paper proposes a model for an artificial autonomous moral agent (AAMA) that is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an "optimizing predictive mind," which uses moral data (describing typical human behavior) and a set of dispositional traits to learn how to classify different actions, against a given body of background knowledge, as morally right, wrong, or neutral. When confronted with a new situation, this AAMA is supposedly able to predict a behavior consistent with the training set. This paper argues that a promising computational tool that fits this model is "neuroevolution," i.e., evolving artificial neural networks.
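To make the neuroevolution idea concrete, the following is a minimal toy sketch of the approach the abstract describes: evolving the weights of a small classifier so that it labels actions as right, neutral, or wrong, consistently with a training set of moral data. Everything here is invented for illustration (the feature vectors, labels, network shape, and fitness function are assumptions, not the paper's actual model), and a simple (1+λ) evolution strategy stands in for a full neuroevolution method such as NEAT.

```python
import random

random.seed(0)

# Hypothetical "moral data": each action is a feature vector with a label
# 0 = right, 1 = neutral, 2 = wrong. These values are made up for illustration.
DATA = [
    ([1.0, 0.0, 0.2], 0),
    ([0.9, 0.1, 0.1], 0),
    ([0.5, 0.5, 0.5], 1),
    ([0.4, 0.6, 0.4], 1),
    ([0.1, 0.9, 1.0], 2),
    ([0.0, 1.0, 0.9], 2),
]
N_IN, N_OUT = 3, 3  # features per action, number of moral categories

def predict(weights, x):
    # Minimal linear "network": score each class, return the argmax.
    scores = [sum(w * xi for w, xi in zip(weights[c * N_IN:(c + 1) * N_IN], x))
              for c in range(N_OUT)]
    return scores.index(max(scores))

def fitness(weights):
    # Fraction of the moral data classified consistently with its labels.
    return sum(predict(weights, x) == y for x, y in DATA) / len(DATA)

def mutate(weights, sigma=0.3):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in weights]

# (1+10) evolution strategy: keep the best of parent and mutated children.
best = [random.gauss(0, 1) for _ in range(N_IN * N_OUT)]
for generation in range(200):
    children = [mutate(best) for _ in range(10)]
    best = max(children + [best], key=fitness)
    if fitness(best) == 1.0:
        break

print(f"training accuracy: {fitness(best):.2f}")
```

Because selection is elitist, fitness never decreases across generations; on this tiny linearly separable data set the evolved weights typically reproduce the training labels, which corresponds to the abstract's claim that the trained AAMA predicts behavior consistent with its training set.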