How People Judge Good From Bad

#artificialintelligence

Moral evaluations occur quickly, following heuristic-like intuitive processes without effortful deliberation. There are several competing explanations for this. The ADC-model predicts that moral judgment consists in concurrent evaluations of three different intuitive components: the character of a person (Agent-component, A); their actions (Deed-component, D); and the consequences brought about in the situation (Consequences-component, C). In this way, it explains the intuitive appeal of precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism), as well as the flexible yet stable nature of moral judgment. Insistence on single-component explanations has led to many centuries of debate over which moral precepts and theories best describe (or should guide) moral evaluation.
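To make the three-component idea concrete, here is a minimal sketch of how an ADC-style evaluation could be computed, assuming each component is scored on a -1 (bad) to +1 (good) scale and combined by a weighted average; the scale, the weights, and the aggregation rule are illustrative assumptions for this sketch, not part of the published ADC-model.

```python
# Minimal, hypothetical sketch of an ADC-style judgment: three concurrent
# component evaluations (Agent, Deed, Consequences) combined into one verdict.
from dataclasses import dataclass

@dataclass
class Situation:
    agent_score: float         # A: perceived character of the agent
    deed_score: float          # D: perceived quality of the action itself
    consequence_score: float   # C: perceived goodness of the outcome

def adc_judgment(s: Situation, weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted average of the three component evaluations (assumed form)."""
    wa, wd, wc = weights
    total = wa * s.agent_score + wd * s.deed_score + wc * s.consequence_score
    return total / (wa + wd + wc)

# Example: a well-intentioned agent (A > 0) performing a norm-violating deed
# (D < 0) that nonetheless produces a good outcome (C > 0).
print(adc_judgment(Situation(agent_score=0.8, deed_score=-0.6, consequence_score=0.7)))
```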


How to Build a Moral Robot

IEEE Spectrum Robotics

Whether it's in our cars, our hospitals, or our homes, we'll soon depend upon robots to make judgment calls in which human lives are at stake. That's why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they'll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we'd expect humans to make?


A Minimalist Model of the Artificial Autonomous Moral Agent (AAMA)

AAAI Conferences

This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical human behavior) and a set of dispositional traits to learn how to classify different actions (given background knowledge) as morally right, wrong, or neutral. When confronted with a new situation, this AAMA is supposedly able to predict a behavior consistent with the training set. This paper argues that a promising computational tool that fits our model is “neuroevolution,” i.e., evolving artificial neural networks.
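As a rough illustration of the neuroevolution idea the abstract points to, the toy sketch below evolves the weights of a small neural network so that it classifies encoded behaviors as wrong, neutral, or right. The feature encoding, network shape, evolutionary loop, and synthetic "moral data" are assumptions made for this example, not the authors' actual setup.

```python
# Toy neuroevolution: evolve the weights of a tiny 3-8-3 network so it
# classifies behaviors (0 = wrong, 1 = neutral, 2 = right) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical moral data: each row encodes features of an observed behavior,
# each label a (synthetic) human judgment in {0, 1, 2}.
X = rng.normal(size=(60, 3))
y = (X @ np.array([1.0, 0.5, 1.5]) > 0).astype(int) + (X.sum(axis=1) > 1.5).astype(int)

N_HIDDEN, N_CLASSES = 8, 3
N_PARAMS = 3 * N_HIDDEN + N_HIDDEN * N_CLASSES  # all weights of the 3-8-3 net

def forward(params, X):
    """Run the network encoded by a flat parameter vector."""
    w1 = params[:3 * N_HIDDEN].reshape(3, N_HIDDEN)
    w2 = params[3 * N_HIDDEN:].reshape(N_HIDDEN, N_CLASSES)
    hidden = np.tanh(X @ w1)
    return (hidden @ w2).argmax(axis=1)

def fitness(params):
    """Fraction of training judgments the network reproduces."""
    return (forward(params, X) == y).mean()

# Simple (mu + lambda) evolution: keep the best individuals, mutate copies.
population = rng.normal(size=(40, N_PARAMS))
for generation in range(200):
    scores = np.array([fitness(p) for p in population])
    parents = population[np.argsort(scores)[-10:]]
    children = parents[rng.integers(0, 10, size=30)] + 0.1 * rng.normal(size=(30, N_PARAMS))
    population = np.vstack([parents, children])

best = max(population, key=fitness)
print("training accuracy:", fitness(best))
```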


Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks

AAAI Conferences

We examine moral decision making in autonomous systems, as inspired by a central question posed by Rossi with respect to moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used to embed morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation held to the same standards as a human agent, removing the demand that ethical behavior is always achieved. We introduce four key meta-qualities desired for our moral standards, and then clarify how we can prove that an agent will correctly learn to perform moral actions, given a set of samples, within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function, achieving stability. We further explain a valuable intrinsic consistency check made possible by deriving logical statements from the machine learning model. In all, this work proposes an approach for building ethical AI systems from the perspective of artificial intelligence research, and sheds light on how much learning is required for an intelligent agent to behave morally with negligible error.
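The abstract's claim about learning moral actions "within certain error bounds" is in the spirit of PAC-style guarantees. As a stand-in (the paper's own bound is not reproduced here), the snippet below computes the textbook sample-complexity bound for a consistent learner over a finite hypothesis class; the hypothesis count, error, and confidence values are illustrative.

```python
# Textbook PAC bound: samples sufficient so that, with probability >= 1 - delta,
# a consistent learner's chosen hypothesis has true error at most epsilon.
import math

def pac_sample_size(hypothesis_count: int, epsilon: float, delta: float) -> int:
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Example: 10^6 candidate moral rules, at most 1% error, 99% confidence.
print(pac_sample_size(10**6, epsilon=0.01, delta=0.01))  # -> 1843 labeled situations
```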


Test your ethics around killer AI cars with this Moral Machine game

#artificialintelligence

Should a self-driving car full of old folks crash to avoid puppies in the crosswalk? Is it OK to run over two criminals if you save one doctor? Whose lives are worth more, seven-year-olds or senior citizens? A new game from MIT researchers, called the "Moral Machine," lets you make the calls in the famous "trolley problem" and see analytics about your ethics. Thinking about these tough questions is more important than ever, since engineers are coding this type of decision making into autonomous vehicles right now.