
AI ethical decision making: Is society ready?

#artificialintelligence

With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, 'is society ready for AI ethical decision making?' by studying human interaction with autonomous cars.


Theresa May warns that robots must be taught morals

Daily Mail - Science & tech

Robots must be taught morals so they can be trusted to make potentially life-and-death decisions, Theresa May will say. Artificial intelligence (AI) is increasingly being used to drive cars, diagnose patients and decide on prison sentences around the world. The Prime Minister will warn that as machines swoop in to carry out more jobs, they must also be taught how to make ethical decisions. The PM - who was mockingly dubbed the Maybot during the election campaign - will make the warning in a major speech on AI this Thursday in Davos, a summit of political and business leaders.


What would the average human do?

#artificialintelligence

Last year, researchers at MIT set up a curious website called the Moral Machine, which peppered visitors with casually gruesome questions about what an autonomous vehicle should do if its brakes failed as it sped toward pedestrians in a crosswalk: whether it should mow down three joggers to spare two children, for instance, or veer into a concrete barrier to save a pedestrian who is elderly, or pregnant, or homeless, or a criminal. In each grisly permutation, the Moral Machine invited visitors to cast a vote about whom the vehicle should kill. The project is a morbid riff on the "trolley problem," a thought experiment that forces participants to choose between letting a runaway train kill five people or diverting its path to kill one person who otherwise wouldn't die. But the Moral Machine gave the riddle a contemporary twist that got picked up by the New York Times, The Guardian and Scientific American, and it eventually collected some 18 million votes from 1.3 million would-be executioners. That unique cache of data about the ethical gut feelings of random people on the internet intrigued Ariel Procaccia, an assistant professor in the computer science department at Carnegie Mellon University, and he struck up a partnership with Iyad Rahwan, one of the MIT researchers behind the Moral Machine, as well as a team of other scientists at both institutions.


Can we trust robots to make ethical decisions?

#artificialintelligence

Once the preserve of science-fiction movies, artificial intelligence is one of the hottest areas of research right now. While the idea behind AI is to make our lives easier, there is concern that as the technology becomes more advanced, we may be heading for disaster. How can we be sure, for instance, that artificially intelligent robots will make ethical choices? There are plenty of instances of artificial intelligence gone wrong. Take the case of the rude and racist chatbot: Tay, Microsoft's AI millennial chatbot, was meant to be a friendly bot that would sound like a teenage girl and engage in light conversation with her followers on Twitter.