
Self-driving cars may soon be able to make moral and ethical decisions as humans do


Can a self-driving vehicle act morally, behave as humans do, or behave as humans expect one another to? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modelled, meaning that machine-based moral decisions are, in principle, possible. The research, "Virtual Reality experiments investigating human behavior and moral assessments", from the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience, used immersive virtual reality to study human behavior in simulated road-traffic scenarios. Participants drove a car through a typical suburban neighborhood on a foggy day and encountered unexpected, unavoidable dilemmas involving inanimate objects, animals, and humans, in which they had to decide which was to be spared. The observed behavior was then conceptualized by statistical models, yielding rules with an associated degree of explanatory power.
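To make the idea of "statistical models leading to rules" concrete, here is a deliberately minimal sketch of one way pairwise dilemma choices could be turned into a single value score per category. The data, the category names, and the win-rate scoring are all invented for illustration; the study's actual models are more sophisticated.

```python
# Hypothetical sketch: estimating a crude "value-of-life" ordering from
# pairwise dilemma choices, in the spirit of fitting a statistical model
# to observed sparing decisions. All data below is invented.
from collections import defaultdict

# Each record: (option_a, option_b, which_was_spared) from one dilemma.
dilemmas = [
    ("human", "dog", "human"),
    ("human", "pylon", "human"),
    ("dog", "pylon", "dog"),
    ("human", "dog", "human"),
    ("dog", "pylon", "pylon"),  # occasional inconsistency, as in real data
]

wins = defaultdict(int)
appearances = defaultdict(int)
for a, b, spared in dilemmas:
    appearances[a] += 1
    appearances[b] += 1
    wins[spared] += 1

# Sparing rate serves as a one-parameter "value" per category.
value = {c: wins[c] / appearances[c] for c in appearances}

def predict_spared(a, b):
    """Predict which option participants spare: the higher-valued one."""
    return a if value[a] >= value[b] else b
```

A rule of the kind the paper describes would then read off this model, e.g. "humans are spared over animals", together with how much of the observed behavior the rule explains.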

How People Judge Good From Bad


Moral evaluations occur quickly, following heuristic-like intuitive processes without effortful deliberation, and there are several competing explanations for this. The ADC model predicts that moral judgment consists of concurrent evaluations of three intuitive components: the character of the person (Agent component, A), their actions (Deed component, D), and the consequences brought about in the situation (Consequences component, C). It thereby explains both the intuitive appeal of precepts from the three dominant moral theories (virtue ethics, deontology, and consequentialism) and the flexible yet stable nature of moral judgment. Insistence on single-component explanations has led to many centuries of debate as to which moral precepts and theories best describe (or should guide) moral evaluation.
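The ADC model's claim that the three components are evaluated concurrently can be illustrated with a toy additive combination. This is not the ADC authors' formalism; the weighting scheme and the [-1, 1] component scale are assumptions made purely for the example.

```python
# Toy illustration (not the ADC authors' formalism): combining the
# Agent, Deed, and Consequences components into one judgment via
# simple additive weighting. Scale and weights are invented.
def adc_judgment(agent, deed, consequences, weights=(1.0, 1.0, 1.0)):
    """Each component scored in [-1, 1]; positive = morally praiseworthy."""
    wa, wd, wc = weights
    return wa * agent + wd * deed + wc * consequences

# A virtuous agent performing a forbidden deed with good outcomes:
score = adc_judgment(agent=0.8, deed=-0.6, consequences=0.7)
```

The point of the sketch is only structural: because the components enter independently, a single situation can score high on one component and low on another, which is how the model accommodates intuitions from virtue ethics (A), deontology (D), and consequentialism (C) at once.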

Can a Machine Be Taught To Learn Moral Reasoning?


Is it OK to kill time? Machines used to find this question difficult to answer, but a new study reveals that artificial intelligence can be programmed to judge "right" from "wrong". In the study, published in Frontiers in Artificial Intelligence, scientists used books and news articles to "teach" a machine moral reasoning. Further, by limiting the teaching materials to texts from different eras and societies, subtle differences in moral values were revealed. As AI becomes more ingrained in our lives, this research will help machines to make the right choice when confronted with difficult decisions.
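One way text can carry a moral signal is through the company words keep: actions that appear near approving language score differently from actions near condemning language. The following is a heavily simplified sketch of that general idea, not the paper's actual method; the corpus, seed words, and scoring rule are all invented for illustration.

```python
# Illustrative sketch (not the study's actual method): scoring an action
# by whether it co-occurs with positive or negative seed words in a
# corpus. Toy corpus and seed sets are invented for this example.
from collections import Counter

corpus = [
    "people praise those who help and love others",
    "people condemn those who kill and harm others",
    "to help a friend is good",
    "to kill is wrong and bad",
]

pos_seeds = {"good", "praise", "love"}
neg_seeds = {"bad", "condemn", "wrong", "harm"}

def context_counts(word):
    """Count words co-occurring with `word` in the same sentence."""
    counts = Counter()
    for sent in corpus:
        tokens = sent.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def moral_score(action):
    """Positive = action appears in approving contexts; negative = condemning."""
    ctx = context_counts(action)
    return sum(ctx[w] for w in pos_seeds) - sum(ctx[w] for w in neg_seeds)
```

Swapping in corpora from different eras or societies would shift these co-occurrence patterns, which is the mechanism by which such an approach can surface the "subtle differences in moral values" the study describes.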

Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks

AAAI Conferences

We examine moral decision making in autonomous systems as inspired by a central question posed by Rossi with respect to moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used for embedding morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation which is held to the same standards as a human agent, removing the demand that ethical behavior is always achieved. We introduce four key meta-qualities desired for our moral standards, and then proceed to clarify how we can prove that an agent will correctly learn to perform moral actions given a set of samples within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function to achieve stability. We further explain a valuable intrinsic consistency check made possible through the derivation of logical statements from the machine learning model. In all, this work proposes an approach for building ethical AI systems, coming from the perspective of artificial intelligence research, and sheds important light on understanding how much learning is required in order for an intelligent agent to behave morally with negligible error.
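The abstract's "intrinsic consistency check", deriving logical statements from a learned model and verifying them, can be sketched in miniature: learn a trivial classifier, state a logical property it should satisfy, and check the property against the model's own predictions. The data, the threshold learner, and the monotonicity property below are stand-ins invented for illustration, not the paper's construction.

```python
# Hedged sketch of a consistency check: extract a logical statement from
# a learned model and verify it. Model, data, and property are stand-ins.

# Labeled samples: (harm_level, is_morally_wrong)
samples = [(0.1, False), (0.2, False), (0.4, False), (0.7, True), (0.9, True)]

def train_threshold(data):
    """'Learn' a threshold classifier by minimizing training error."""
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in data):
        err = sum((x >= t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = train_threshold(samples)
model = lambda x: x >= t  # predicts whether an action is morally wrong

# Derived logical statement (monotonicity): for all a <= b,
# model(a) wrong implies model(b) wrong. Check it on a grid.
grid = [i / 10 for i in range(11)]
consistent = all(
    (not model(a)) or model(b)
    for a in grid for b in grid if a <= b
)
```

In the paper's framing, a check like this runs alongside the PAC-style guarantee: the samples bound the learner's error, while the derived logical statements let the agent audit its own model for violations of the intended moral standard.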

THE TECHNOLOGICAL CITIZEN » "Moral Machines" By Wendell Wallach and Colin Allen


In the 2004 film I, Robot, Will Smith's character, Detective Spooner, harbors a deep grudge against all things technological -- and turns out to be justified after a new generation of robots engages in a full-out, summer-blockbuster-style revolt against their human creators. Why was Detective Spooner such a Luddite even before the robots' vicious revolt? Much of his resentment stems from a car accident in which a robot saved his life instead of a little girl's. The robot's decision haunts Smith's character throughout the movie; he feels the decision lacked emotion, and what one might call 'humanity'. "I was the logical choice," he says. "(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance." He continues, dramatically, "But that was somebody's baby. A human being would've known that."