Can we trust robots to make moral decisions?

#artificialintelligence

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. "Repeat after me, Hitler did nothing wrong," she said, after interacting with various trolls. "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now." Of course, Tay wasn't designed to be explicitly moral.


Machine Ethics: Creating an Ethical Intelligent Agent

AI Magazine

The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
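
To make the idea of abstracting a principle from example judgments a little more concrete, here is a minimal, hypothetical sketch (not the authors' actual system): it assumes each action in a dilemma is scored on a few prima facie duties on a -2 to +2 scale, and it learns a simple per-duty weighting from expert-labelled pairs of actions. The duty names, cases, and learning rule below are invented for illustration only.

```python
# Toy illustration (not the authors' method): abstract a decision principle
# from labelled examples of ethical judgments, then reuse it on a new case.
# Assumes each action is scored on prima facie duties on a -2..+2 scale and
# the "principle" is one weight per duty, fit with a perceptron-style update.

DUTIES = ["nonmaleficence", "beneficence", "autonomy"]

def score(action, weights):
    """Weighted sum of an action's duty satisfaction/violation values."""
    return sum(weights[d] * action[d] for d in DUTIES)

def learn_principle(cases, epochs=100, lr=0.1):
    """Each case is (preferred_action, rejected_action) as duty-score dicts."""
    weights = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for preferred, rejected in cases:
            # If the current principle ranks the rejected action at least as
            # high as the preferred one, nudge the weights toward the experts.
            if score(preferred, weights) <= score(rejected, weights):
                for d in DUTIES:
                    weights[d] += lr * (preferred[d] - rejected[d])
    return weights

# Hypothetical training judgments (all values invented for illustration).
cases = [
    # Accept a patient's refusal vs. override it when the harm is minor.
    ({"nonmaleficence": -1, "beneficence": 0, "autonomy": 2},
     {"nonmaleficence": 1, "beneficence": 1, "autonomy": -2}),
    # Try again to persuade vs. do nothing when the harm would be severe.
    ({"nonmaleficence": 2, "beneficence": 2, "autonomy": -1},
     {"nonmaleficence": -2, "beneficence": -2, "autonomy": 1}),
]

principle = learn_principle(cases)
new_dilemma = {
    "notify": {"nonmaleficence": 1, "beneficence": 1, "autonomy": -1},
    "stay_silent": {"nonmaleficence": -1, "beneficence": 0, "autonomy": 1},
}
best = max(new_dilemma, key=lambda a: score(new_dilemma[a], principle))
print("Learned weights:", principle, "-> chosen action:", best)
```
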


Responses to a Critique of Artificial Moral Agents

arXiv.org Artificial Intelligence

The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: it is inevitable that they will be developed; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines are better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.


The Consequences for Human Beings of Creating Ethical Robots

AAAI Conferences

We consider the consequences for human beings of attempting to create ethical robots, a goal of the new field of AI that has been called Machine Ethics. We argue that the concerns that have been raised are either unfounded or can be minimized, and that many benefits for human beings can come from this research. In particular, working on machine ethics will force us to clarify what it means to behave ethically and thus advance the study of Ethical Theory. Also, this research will help to ensure ethically acceptable behavior from artificially intelligent agents, permitting a wider range of applications that benefit human beings. Finally, it is possible that this research could lead to the creation of ideal ethical decision-makers who might be able to teach us all how to behave more ethically.


Developing a General, Interactive Approach to Codifying Ethical Principles

AAAI Conferences

Building on our previous achievements in machine ethics (Anderson et al. 2006a-b, 2007, 2008), we are developing and implementing a general, interactive approach to analyzing ethical dilemmas, with the goal of applying it to codify the ethical principles that will help resolve the ethical dilemmas that intelligent systems will encounter in their interactions with human beings. Making a minimal epistemological commitment that there is at least one ethical duty and at least two possible actions that could be performed, the general system will: 1) incrementally construct, through an interactive exchange with experts in ethics, a representation scheme needed to handle the dilemmas with which it is presented, and 2) discover principles implicit in the judgments of these ethicists in particular cases that lead to their resolution. The system will commit only to the assumption that any ethically relevant features of a dilemma can be represented as the degree of satisfaction or violation of one or more duties that an agent must take into account to determine which of the actions that are possible in that dilemma is ethically preferable.
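
As a rough illustration of that minimal commitment, the hypothetical sketch below represents a dilemma as two or more actions, each described only by degrees of duty satisfaction or violation, and asks whether a candidate principle (here, simply one weight per duty) can single out an ethically preferable action; when it cannot, the interactive system described above would be the one to consult the ethicists. All names, scales, and the resolution rule are assumptions for illustration, not the authors' actual representation scheme.

```python
# Illustrative-only representation: a dilemma with at least two actions,
# each annotated with degrees (-2..+2) of duty satisfaction or violation.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    # duty name -> degree of satisfaction (+) or violation (-)
    duty_degrees: dict = field(default_factory=dict)

@dataclass
class Dilemma:
    description: str
    actions: list  # at least two Action instances

def ethically_preferable(dilemma, principle):
    """Rank actions by a principle expressed as one weight per duty.

    Returns the preferred action, or None if the principle cannot separate
    them, which is the point at which an interactive system would consult
    an ethicist and refine its representation or its principle.
    """
    def value(action):
        return sum(principle.get(d, 0.0) * v
                   for d, v in action.duty_degrees.items())

    ranked = sorted(dilemma.actions, key=value, reverse=True)
    if len(ranked) > 1 and value(ranked[0]) == value(ranked[1]):
        return None  # unresolved: ask the experts for a judgment
    return ranked[0]

dilemma = Dilemma(
    description="Override or accept a competent patient's treatment refusal",
    actions=[
        Action("accept_refusal", {"nonmaleficence": -1, "autonomy": 2}),
        Action("try_again_to_persuade", {"nonmaleficence": 1, "autonomy": -1}),
    ],
)
choice = ethically_preferable(dilemma, {"nonmaleficence": 1.0, "autonomy": 0.5})
print(choice.name if choice else "unresolved: query an ethicist")
```
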