Self-driving cars may soon be able to make moral and ethical decisions as humans do

#artificialintelligence

Can a self-driving vehicle be moral, act as humans do, or act as humans expect other humans to act? Contrary to previous thinking, a ground-breaking new study has found for the first time that human morality can be modelled, meaning that machine-based moral decisions are, in principle, possible. The research, conducted at the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience, used immersive virtual reality experiments to investigate human behavior and moral assessments in simulated road traffic scenarios. Participants were asked to drive a car through a typical suburban neighborhood on a foggy day, during which they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals, and humans, and had to decide which was to be spared. The results were then captured by statistical models that yield rules, each with an associated degree of explanatory power for the observed behavior.
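To make the modelling idea concrete, here is a minimal illustrative sketch, not the Osnabrück authors' actual analysis: a simple logistic "value-of-life" style model fit to simulated two-option dilemma choices, where each fitted coefficient plays the role of a rule-like weight for a category of obstacle. The category names, weights, and noisy choice rule below are assumptions invented for the example.

```python
# Hypothetical sketch: fitting a simple "value-of-life" style model to
# simulated dilemma choices. Each trial pits an obstacle in the left lane
# against one in the right; the model learns a per-category weight and
# predicts which obstacle a driver spares.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["object", "animal", "adult", "child"]          # assumed categories
true_value = {"object": 0.0, "animal": 1.0, "adult": 3.0, "child": 4.0}  # assumed

def one_hot(cat):
    v = np.zeros(len(categories))
    v[categories.index(cat)] = 1.0
    return v

# Simulate trials: feature vector = (left obstacle) - (right obstacle);
# label 1 means the simulated participant spared the left obstacle.
X, y = [], []
for _ in range(2000):
    left, right = rng.choice(categories, size=2, replace=False)
    x = one_hot(left) - one_hot(right)
    utility_gap = true_value[left] - true_value[right]
    p_spare_left = 1.0 / (1.0 + np.exp(-utility_gap))        # noisy choice rule
    X.append(x)
    y.append(rng.random() < p_spare_left)

model = LogisticRegression(fit_intercept=False).fit(np.array(X), np.array(y))

# The fitted coefficients act as relative "values" per category; their ordering
# is the kind of rule a statistical model of observed behavior could expose
# (and, in principle, hand to a machine).
for cat, w in zip(categories, model.coef_[0]):
    print(f"{cat:>6}: {w:+.2f}")
```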


What moral code should your self-driving car follow?

Popular Science

Imagine you are driving down the street when two people -- one child and one adult -- step onto the road. Hitting one of them is unavoidable. You have a terrible choice. Now imagine that the car is driverless. Until now, no one believed that autonomous cars -- robotic vehicles that operate without human control -- could make moral and ethical choices, an issue that has been central to the ongoing debate about their use.


Scientists weigh moral dilemma for driverless cars: Who lives or dies?

The Japan Times

BOSTON – Imagine you're behind the wheel when your brakes fail. As you speed toward a crowded crosswalk, you're confronted with an impossible choice: veer right and mow down a large group of elderly people, or veer left into a woman pushing a stroller. Now imagine you're riding in the back of a self-driving car. Researchers at the Massachusetts Institute of Technology are asking people worldwide how they think a robot car should handle such life-or-death decisions. Their findings so far show that people prefer a self-driving car to act for the greater good, sacrificing its passenger if it can save a crowd of pedestrians.


Solution for self-driving cars' moral dilemma

#artificialintelligence

Every day we see advances in self-driving cars and related technology. Tesla is the company bringing that kind of technology to consumers every day, and it is doing so quite well. But one of the most "problematic" things about it is the moral dilemma: what should the system/car do in certain edge cases? I was thinking about this, and it might not be that difficult a problem to solve. When we drive cars without self-driving technology and get into a situation that poses a moral dilemma (for example, which lives to save in an unavoidable crash), we make the decision in a split second.


MIT game lets you decide which humans survive in self-driving car scenarios

Mashable

As self-driving cars slowly make their way onto our roads, how will developers help these autonomous vehicles make difficult decisions during accidents? A new MIT project illustrates just how difficult this will be by mixing gaming with deep moral questions. The Moral Machine presents you with a series of traffic scenarios in which a self-driving car must make a choice between two perilous options. Should you avoid hitting a group of five jaywalking pedestrians by hitting a concrete divider that will kill two of your passengers? If there's no other choice, do you drive into a group of young pedestrians, or elderly pedestrians?