Google's 'Duplex' Raises Ethical Questions


Google has introduced a forthcoming feature called "Duplex." It can make outgoing calls to schedule appointments, and its voice has all the characteristics of real human speech. NPR's Mary Louise Kelly speaks with Shane Mac, CEO of Assist, about the technology and the ethical questions it raises.

The Consequences for Human Beings of Creating Ethical Robots

AAAI Conferences

We consider the consequences for human beings of attempting to create ethical robots, a goal of the new field of AI that has been called Machine Ethics. We argue that the concerns that have been raised are either unfounded or can be minimized, and that many benefits for human beings can come from this research. In particular, working on machine ethics will force us to clarify what it means to behave ethically and thus advance the study of Ethical Theory. Also, this research will help to ensure ethically acceptable behavior from artificially intelligent agents, permitting a wider range of applications that benefit human beings. Finally, it is possible that this research could lead to the creation of ideal ethical decision-makers who might be able to teach us all how to behave more ethically.

Can AI be taught to be nice?


We are rapidly approaching the day when an autonomous artificial intelligence may have to make ethical decisions of great magnitude without human supervision. The question we must answer is how it should act when life is on the line. Helping us make our decision is philosopher James H. Moor, one of the first philosophers to make significant inroads into computer ethics. In his 2009 essay "Four Kinds of Ethical Robots," he examines the possible ethical responsibilities machines could have, sorting machines into four kinds, and how we ought to think about each. Each kind has different ethical abilities that we need to account for when designing and responding to these machines.

Machine Ethics: Creating an Ethical Intelligent Agent

AI Magazine

The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
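The result described above — a machine abstracting an ethical principle from examples of correct judgments and then using it to guide behavior — can be sketched in miniature. The toy Python example below is an illustration of the general idea only, not the authors' actual system (which used a more sophisticated learning method over prima facie duties); the duty names, case data, and perceptron-style update rule are all assumptions made for the sketch.

```python
# Toy sketch: learn a weighting over prima facie duties from labeled cases,
# then use the learned weights as a "principle" to rank candidate actions.
# The duties and cases are hypothetical, chosen only for illustration.

def learn_duty_weights(cases, epochs=100, lr=0.1):
    """Perceptron-style learning. Each case is a pair of duty-score vectors
    (chosen_action, rejected_action); weights are nudged until every chosen
    action outscores its rejected alternative."""
    n = len(cases[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for chosen, rejected in cases:
            score = lambda a: sum(wi * ai for wi, ai in zip(w, a))
            if score(chosen) <= score(rejected):
                # Move weights toward the duty profile of the correct choice.
                for i in range(n):
                    w[i] += lr * (chosen[i] - rejected[i])
    return w

# Hypothetical duty dimensions: [beneficence, non-maleficence, autonomy].
# Each vector says how strongly an action satisfies (+) or violates (-) a duty.
cases = [
    ([1, 2, -1], [0, 0, 1]),   # preventing harm outweighed an autonomy cost
    ([0, 1, 0],  [1, -2, 0]),  # a small benefit did not justify serious harm
]
w = learn_duty_weights(cases)

def choose(actions):
    """Apply the abstracted principle: pick the highest-scoring action."""
    return max(actions, key=lambda a: sum(wi * ai for wi, ai in zip(w, a)))
```

The point of the sketch is the two-phase shape the abstract describes: a training phase that generalizes from correct ethical judgments into an explicit, inspectable principle (here, the weight vector `w`), and a decision phase (`choose`) that applies that principle to new situations.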