The Consequences for Human Beings of Creating Ethical Robots

AAAI Conferences

We consider the consequences for human beings of attempting to create ethical robots, a goal of the new field of AI that has been called Machine Ethics. We argue that the concerns that have been raised are either unfounded or can be minimized, and that many benefits for human beings can come from this research. In particular, working on machine ethics will force us to clarify what it means to behave ethically and thus advance the study of Ethical Theory. Also, this research will help to ensure ethically acceptable behavior from artificially intelligent agents, permitting a wider range of applications that benefit human beings. Finally, it is possible that this research could lead to the creation of ideal ethical decision-makers who might be able to teach us all how to behave more ethically.


THE TECHNOLOGICAL CITIZEN » "Moral Machines" By Wendell Wallach and Colin Allen

#artificialintelligence

In the 2004 film I, Robot, Will Smith's character Detective Spooner harbors a deep grudge against all things technological -- and turns out to be justified after a new generation of robots engage in a full-out, summer blockbuster-style revolt against their human creators. Why was Detective Spooner such a Luddite, even before the robots' vicious revolt? Much of his resentment stems from a car accident he endured in which a robot saved his life instead of a little girl's. The robot's decision haunts Smith's character throughout the movie; he feels the decision lacked emotion, and what one might call 'humanity'. "I was the logical choice," he says. "(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance." He continues, dramatically, "But that was somebody's baby. A human being would've known that."


FS05-06-007.pdf

AAAI Conferences

Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction.


The Growing Role of Ethics in AI Development – Sebastian Jones – Medium

#artificialintelligence

In a strange paradox, today's ethics deals with arguably the world's, and society's, most important questions, yet these are accepted as questions that will never reach a conclusion. Ethical dilemmas essentially exist to provide a moral dialogue that indirectly influences law and societal values. But as we begin developing artificial intelligence and endowing these artificial creations with moral compasses, we will have to make conclusive decisions about how different ethical dilemmas are resolved. Consider, for example, the trolley problem: a train is approaching three people on the track, and one can either let the three die or push someone off a bridge, killing him but saving the others. Today this is largely mental chess, amusing friends at a dinner party and challenging world-class academics alike, and it has little practical relevance: one would almost never run into such a situation, and anyone who did would not have time to weigh the ethical consequences of the different actions.


2052

AI Magazine

The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics--which has traditionally focused on ethical issues surrounding humans' use of machines--machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior. We need to make a distinction between what James Moor has called an "implicit ethical agent" and an "explicit ethical agent" (Moor 2006).
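As a rough illustration of what an "explicit ethical agent" might look like, the following hypothetical Python sketch represents an ethical principle as weights over prima facie duties and uses that principle to rank candidate actions. This is not Anderson and Anderson's actual system; the duty names, weights, and candidate actions are invented purely for illustration.

```python
# Hypothetical sketch of an explicit ethical agent: an ethical principle is
# represented as weights over prima facie duties, and the agent uses that
# principle to rank candidate actions. All names and numbers are illustrative.

# A (hypothetical) learned principle: how much each duty counts.
PRINCIPLE_WEIGHTS = {
    "non_maleficence": 3.0,   # avoid harming the person affected
    "beneficence": 2.0,       # promote the person's good
    "respect_autonomy": 2.5,  # respect the person's choices
}

def ethical_score(action):
    """Weighted sum of how strongly an action satisfies (+) or violates (-) each duty."""
    return sum(PRINCIPLE_WEIGHTS[duty] * level for duty, level in action["duties"].items())

def choose_action(actions):
    """Return the candidate action the principle judges most ethically acceptable."""
    return max(actions, key=ethical_score)

# Example: two candidate actions, each annotated with duty satisfaction levels
# in [-1, 1] (negative = violates the duty, positive = satisfies it).
candidates = [
    {"name": "accept_refusal",
     "duties": {"non_maleficence": -0.5, "beneficence": -0.5, "respect_autonomy": 1.0}},
    {"name": "try_again_to_persuade",
     "duties": {"non_maleficence": 0.5, "beneficence": 0.5, "respect_autonomy": -0.3}},
]

print("Principle-preferred action:", choose_action(candidates)["name"])
```

The point of the sketch is only that the principle is represented explicitly and consulted at decision time, which is what distinguishes an explicit ethical agent from one whose acceptable behavior is merely a side effect of its programming.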