The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics--which has traditionally focused on ethical issues surrounding humans' use of machines--machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior. We need to make a distinction between what James Moor has called an "implicit ethical agent" and an "explicit ethical agent" (Moor 2006).
Building on our previous achievements in machine ethics (Anderson et al. 2006a-b, 2007, 2008), we are developing and implementing a general interactive approach to analyzing ethical dilemmas, with the goal of codifying the ethical principles needed to resolve the dilemmas that intelligent systems will encounter in their interactions with human beings. Making a minimal epistemological commitment that there is at least one ethical duty and at least two possible actions that could be performed, the general system will: 1) incrementally construct, through an interactive exchange with experts in ethics, a representation scheme needed to handle the dilemmas with which it is presented, and 2) discover the principles, implicit in these ethicists' judgments of particular cases, that lead to their resolution. The system will commit only to the assumption that any ethically relevant features of a dilemma can be represented as the degree of satisfaction or violation of one or more duties that an agent must take into account to determine which of the actions that are possible in that dilemma is ethically preferable.
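The representational commitment described above, in which each possible action is characterized by the degree to which it satisfies or violates each prima facie duty, can be illustrated with a minimal sketch. This is not the authors' implementation: the duty names, the integer degree scale, and the weighted-sum comparison are all assumptions made for illustration.

```python
from dataclasses import dataclass

# An action in a dilemma is described by the degree to which it satisfies (+)
# or violates (-) each ethically relevant duty, on an integer scale.
# The duty names below are illustrative, not the system's actual feature set.

@dataclass
class Action:
    name: str
    duties: dict  # duty name -> degree in {-2, -1, 0, 1, 2}

def preferable(a: Action, b: Action, weights: dict) -> Action:
    """Return the ethically preferable action under a simple weighted
    principle: sum each duty's degree times its weight (assumed form)."""
    score = lambda act: sum(weights.get(d, 1) * v for d, v in act.duties.items())
    return a if score(a) >= score(b) else b

# A hypothetical medication-reminder dilemma: notify an overseer of a
# patient's refusal, or accept the refusal. Degrees and weights are assumed.
notify = Action("notify overseer", {"nonmaleficence": 1, "autonomy": -1})
accept = Action("accept refusal", {"nonmaleficence": -2, "autonomy": 1})
weights = {"nonmaleficence": 2, "autonomy": 1}
print(preferable(notify, accept, weights).name)  # -> notify overseer
```

The point of the sketch is the interface, not the numbers: once dilemmas are reduced to duty-degree profiles, "ethically preferable" becomes a comparison that a learned principle can decide.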
Having discovered a decision principle, for a well-known prima facie duty theory in biomedical ethics, that resolves particular cases of a common type of ethical dilemma, we developed three applications: a medical ethics advisor system, a medication reminder system, and an instantiation of the reminder system in a Nao robot. We are now developing a general, automated method for generating from scratch the ethics needed for a machine to function in a particular domain, without making the assumptions used in our prototype systems.
A paradigm of case-supported principle-based behavior (CPB) is proposed to help ensure ethical behavior of autonomous machines. We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior. The requirements, methods, implementation, and evaluation components of the CPB paradigm are detailed.
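The core of the paradigm, discovering a principle from cases on which ethicists agree, can be sketched as a simple learning problem: find duty weights under which the agreed-upon right action outscores its alternative in every case. The perceptron-style update below is a stand-in for the inductive-logic-programming approach of the actual research, and the duties and cases are invented for illustration.

```python
# A minimal sketch of discovering a principle from agreed-upon cases.
# Each case pairs the duty-degree profile of the action ethicists judged
# right with that of a wrong alternative. We seek weights under which the
# right action always scores higher. (Illustrative stand-in, not the
# published method.)

def learn_weights(cases, duties, epochs=100):
    w = {d: 0.0 for d in duties}
    for _ in range(epochs):
        converged = True
        for right, wrong in cases:
            margin = sum(w[d] * (right.get(d, 0) - wrong.get(d, 0)) for d in duties)
            if margin <= 0:  # principle misorders this case: adjust weights
                converged = False
                for d in duties:
                    w[d] += right.get(d, 0) - wrong.get(d, 0)
        if converged:
            break
    return w

duties = ["nonmaleficence", "beneficence", "autonomy"]
cases = [
    # (profile of the action judged right, profile of the alternative)
    ({"nonmaleficence": 1, "autonomy": -1}, {"nonmaleficence": -2, "autonomy": 1}),
    ({"beneficence": 2, "autonomy": -1}, {"beneficence": -1, "autonomy": 2}),
]
w = learn_weights(cases, duties)
score = lambda p: sum(w[d] * p.get(d, 0) for d in duties)
assert all(score(r) > score(wr) for r, wr in cases)
```

The learned weights are one explicit principle consistent with the consensus cases; because the principle is explicit, it can also serve, as the paradigm requires, to justify actions and to flag behavior the cases never anticipated.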
We consider the consequences for human beings of attempting to create ethical robots, a goal of the new field of AI that has been called Machine Ethics. We argue that the concerns that have been raised are either unfounded, or can be minimized, and that many benefits for human beings can come from this research. In particular, working on machine ethics will force us to clarify what it means to behave ethically and thus advance the study of Ethical Theory. Also, this research will help to ensure ethically acceptable behavior from artificially intelligent agents, permitting a wider range of applications that benefit human beings. Finally, it is possible that this research could lead to the creation of ideal ethical decision-makers who might be able to teach us all how to behave more ethically.