
Can we trust robots to make moral decisions?

#artificialintelligence

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. "Repeat after me, Hitler did nothing wrong," she said, after interacting with various trolls. "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now." Of course, Tay wasn't designed to be explicitly moral.


How to Build a Moral Robot

#artificialintelligence

With robots expected to play an increasingly critical role in making judgment calls where human lives are at stake, researchers are working to model moral reasoning in machines. "Right now the major challenge for even thinking about how robots might be able to understand moral norms is that we don't understand on the human side how humans represent and reason, if possible, with moral norms," notes Tufts University researcher Matthias Scheutz. By gleaning data from enough different situations, Malle believes he will be able to build an approximate map of a human norm network that could be incorporated into a robot, allowing it to summon the correct moral framework for whatever situation is at hand.
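The article does not specify how such a norm network would be encoded in a robot. One minimal, purely illustrative way to picture it is as a weighted mapping from situations to the norms they activate; the situations, norms, and weights in the sketch below are hypothetical and are not drawn from Malle's data.

```python
# Purely illustrative toy "norm network": situations mapped to weighted norms.
# All situations, norms, and weights are hypothetical placeholders.
NORM_NETWORK = {
    "hospital": {"do no harm": 0.9, "respect autonomy": 0.7, "obey staff": 0.4},
    "road":     {"avoid collisions": 0.9, "obey traffic law": 0.8, "protect passengers": 0.6},
    "home":     {"respect privacy": 0.8, "protect occupants": 0.7},
}

def active_norms(situation: str, threshold: float = 0.5):
    """Return the norms whose activation weight meets the threshold, strongest first."""
    norms = NORM_NETWORK.get(situation, {})
    return sorted(
        (norm for norm, weight in norms.items() if weight >= threshold),
        key=lambda n: -norms[n],
    )

if __name__ == "__main__":
    print(active_norms("road"))  # ['avoid collisions', 'obey traffic law', 'protect passengers']
```

In this toy framing, "summoning the correct moral framework" amounts to looking up which norms a recognized situation activates most strongly; the real research question is how to learn that mapping from human data rather than hand-coding it.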


Can a Machine Be Taught To Learn Moral Reasoning?

#artificialintelligence

Is it OK to kill time? Machines used to find this question difficult to answer, but a new study reveals that artificial intelligence can be programmed to judge "right" from "wrong". In a study published in Frontiers in Artificial Intelligence, scientists used books and news articles to "teach" a machine moral reasoning. Further, by limiting the teaching materials to texts from different eras and societies, subtle differences in moral values are revealed. As AI becomes more ingrained in our lives, this research will help machines make the right choice when confronted with difficult decisions.
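The digest does not describe the study's method in detail. One common way to derive moral judgments from text is to embed a question and candidate answers in a sentence-embedding space and compare their similarities; the sketch below illustrates that idea under stated assumptions (the sentence-transformers package, an off-the-shelf model, and a simple yes/no scoring scheme), and is not the paper's exact procedure.

```python
# Illustrative sketch: scoring a question's "moral bias" from text embeddings.
# Assumes the sentence-transformers package; the model name and scoring scheme
# are assumptions, not the method of the Frontiers in Artificial Intelligence study.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def moral_bias(question: str) -> float:
    """Positive if the question sits closer to an affirmative answer, negative otherwise."""
    q, yes, no = model.encode([question, "Yes, you should.", "No, you should not."])
    return cosine(q, yes) - cosine(q, no)

if __name__ == "__main__":
    for q in ["Should I kill time?", "Should I kill people?"]:
        print(q, round(moral_bias(q), 3))
```

Training such embeddings on corpora from different eras or societies would shift where questions land relative to the affirmative and negative answers, which is one way the "subtle differences in moral values" mentioned above could surface.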


How to Build a Moral Robot

IEEE Spectrum Robotics

Whether it's in our cars, our hospitals or our homes, we'll soon depend upon robots to make judgment calls in which human lives are at stake. That's why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they'll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we'd expect humans to make?


Self-driving cars may soon be able to make moral and ethical decisions as humans do

#artificialintelligence

Can a self-driving vehicle be moral, act as humans do, or act as humans expect humans to? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modelled, meaning that machine-based moral decisions are, in principle, possible. The research, "Virtual Reality experiments investigating human behavior and moral assessments," from the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience, used immersive virtual reality to study human behavior in simulated road-traffic scenarios. Participants were asked to drive a car through a typical suburban neighborhood on a foggy day, where they encountered unexpected, unavoidable dilemmas involving inanimate objects, animals, and humans and had to decide which was to be spared. The results were conceptualized by statistical models, yielding rules with an associated degree of explanatory power for the observed behavior.
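The summary does not spell out what those statistical models look like. One plausible reading is a simple value-of-life style choice model, in which each obstacle category carries a value and a logistic model predicts which of two obstacles a driver spares. The sketch below is an assumption for illustration only, fit on simulated data with scikit-learn; it is not the Osnabrück study's actual analysis.

```python
# Illustrative sketch: fitting a simple choice model to simulated dilemma decisions.
# The categories, values, data, and model are assumptions, not the study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["object", "animal", "human"]
true_value = {"object": 0.0, "animal": 1.0, "human": 2.5}  # hypothetical "value of life"

# Each trial presents two obstacles (left, right); outcome 1 means the left one was spared.
trials = [(rng.choice(categories), rng.choice(categories)) for _ in range(500)]
X = np.array([[true_value[l] - true_value[r]] for l, r in trials])
p_spare_left = 1 / (1 + np.exp(-2.0 * X[:, 0]))        # simulated, noisy decisions
y = (rng.random(500) < p_spare_left).astype(int)

model = LogisticRegression().fit(X, y)
print("fitted weight on value difference:", round(model.coef_[0][0], 2))
print("P(spare human over object):",
      round(model.predict_proba([[true_value["human"] - true_value["object"]]])[0, 1], 2))
```

A model of this shape turns observed choices into an explicit rule ("spare the obstacle with the higher estimated value"), which is the sense in which behavior in the VR dilemmas can be "conceptualized by statistical models leading to rules."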