Robots can learn socially accepted behavior by reading and understanding children's books, particularly stories about chivalry. The rapid growth of artificial intelligence has been accompanied by fears that intelligent machines could pose a threat to humanity. To ease this anxiety, a team of researchers developed a system called "Quixote" that trains AI to behave in social settings: it teaches agents to read children's stories, recognize acceptable social behavior, and learn standard event sequences, aligning their goals with proper human conduct.
Despite the design community's tendency to create fad cycles (remember when we spent a year arguing about skeuomorphism?), I'll take this as a positive sign for the maturity of our practice. There are signs that a more conscientious, responsible approach is becoming a stable pillar of our professional identity. However, the challenge of "design ethics" is that once you start thinking about it, you start to notice it everywhere. And it's not just the business models; it's the technical and economic structures that enable them. That's an awful lot of context to keep in mind at once.
Given a swell of dire warnings about the future of Artificial Intelligence over the last few years, the field of AI ethics has become a hive of activity. These warnings come from a variety of experts such as Oxford University's Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak. In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. Now, a paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.
The emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field showing that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
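To make the idea of abstracting a principle from example judgments concrete, here is a minimal toy sketch (not the Andersons' actual system): an "ethical principle" is represented as a weighting over hypothetical prima facie duties, learned perceptron-style from example cases in which one action was judged ethically preferable to another, and then used to rank new actions. The duty names, scores, and cases are invented for illustration.

```python
# Toy sketch: learn a weighting over prima facie duties from example
# ethical judgments, then use it to guide action selection.
# Duty names and training cases are hypothetical, for illustration only.

DUTIES = ["nonmaleficence", "beneficence", "autonomy"]

# Each training case is a pair (preferred_action, rejected_action);
# an action is scored on how well it satisfies each duty (-2..2).
CASES = [
    ({"nonmaleficence": 2, "beneficence": 1, "autonomy": -1},
     {"nonmaleficence": -1, "beneficence": 2, "autonomy": 1}),
    ({"nonmaleficence": 1, "beneficence": 0, "autonomy": 2},
     {"nonmaleficence": 1, "beneficence": 1, "autonomy": -2}),
]

def learn_weights(cases, epochs=100, lr=0.1):
    """Perceptron-style learning: adjust duty weights until each
    preferred action outscores the action it was preferred over."""
    w = {d: 0.0 for d in DUTIES}
    for _ in range(epochs):
        for preferred, rejected in cases:
            margin = sum(w[d] * (preferred[d] - rejected[d]) for d in DUTIES)
            if margin <= 0:  # misranked: nudge weights toward the preferred action
                for d in DUTIES:
                    w[d] += lr * (preferred[d] - rejected[d])
    return w

def choose(actions, w):
    """Apply the learned principle: pick the highest-scoring action."""
    return max(actions, key=lambda a: sum(w[d] * a[d] for d in DUTIES))

weights = learn_weights(CASES)
```

The learned weights generalize beyond the training pairs: any new set of candidate actions, once scored on the same duties, can be ranked by the same principle, which is the essence of using an abstracted principle to guide behavior rather than hard-coding case-by-case rules.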