Towards Moral Autonomous Systems

arXiv.org Artificial Intelligence

Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches towards the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots.


Machine ethics: The robot's dilemma

AITopics Original Links

The fully programmable Nao robot has been used to experiment with machine ethics. In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics -- engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance.


The Consequences for Human Beings of Creating Ethical Robots

AAAI Conferences

We consider the consequences for human beings of attempting to create ethical robots, a goal of the new field of AI that has been called Machine Ethics. We argue that the concerns that have been raised are either unfounded, or can be minimized, and that many benefits for human beings can come from this research. In particular, working on machine ethics will force us to clarify what it means to behave ethically and thus advance the study of Ethical Theory. Also, this research will help to ensure ethically acceptable behavior from artificially intelligent agents, permitting a wider range of applications that benefit human beings. Finally, it is possible that this research could lead to the creation of ideal ethical decision-makers who might be able to teach us all how to behave more ethically. A new field of Artificial Intelligence is emerging that has been called Machine Ethics.


Robots should be equipped with an ethical black box

Daily Mail

Science fiction is littered with nightmare visions of robots turning against their creators, but a proposed device could help us keep track of their decision making. Intelligent machines of the future could be equipped with recorders, similar to the black box on aircraft, that would capture their ethical behaviour. This 'ethical black box' would allow experts to analyse what went wrong in the event of an accident or malfunction. While it may not be enough to stop a robot from harming a human, albeit accidentally, it could prove useful in helping to avoid repeat incidents.