
A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

The New Yorker

The first time Azim Shariff met Iyad Rahwan--the first real time, after communicating with him by phone and e-mail--was in a driverless car. It was November, 2012, and Rahwan, a thirty-four-year-old professor of computing and information science, was researching artificial intelligence at the Masdar Institute of Science and Technology, a university in Abu Dhabi. He was eager to explore how concepts within psychology--including social networks and collective reasoning--might inform machine learning, but there were few psychologists working in the U.A.E. Shariff, a thirty-one-year-old with wild hair and expressive eyebrows, was teaching psychology at New York University's campus in Abu Dhabi; he guesses that he was one of four research psychologists in the region at the time, an estimate that Rahwan told me "doesn't sound like an exaggeration." Rahwan cold-e-mailed Shariff and invited him to visit his research group.


A driverless car's computer could decide who lives and dies in a crash

AITopics Original Links

Amid all the buzz about vehicles that drive themselves, there are serious ethical questions facing regulators, manufacturers and the people who will ride in them. If faced with an unavoidable fatal crash, would the car be programmed to save its occupants at all costs or would it sacrifice its passengers for the greater good of saving a group of pedestrians? "There's this trade-off between the interests of the driver, or rather the passenger who buys the car, and the level of public acceptance versus public outrage," says Azim Shariff of the Culture and Morality Lab at the University of Oregon. Along with researchers from France and the Massachusetts Institute of Technology, Shariff set out to test public attitudes on the cold, hard decisions computer programs will have to make when lives are on the line.


Ethics dilemmas may hold back autonomous cars: study

#artificialintelligence

Washington (AFP) - If it has to make a choice, will your autonomous car kill you or pedestrians on the street? The looming arrival of self-driving vehicles is likely to vastly reduce traffic fatalities, but also poses difficult moral dilemmas, researchers said in a study Thursday. Autonomous driving systems will require programmers to develop algorithms to make critical decisions that are based more on ethics than technology, according to the study published in the journal Science. "Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," said the study by Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon and Iyad Rahwan of the Massachusetts Institute of Technology. "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest -- let alone account for different cultures with various moral attitudes regarding life-life tradeoffs -- but public opinion and social pressure may very well shift as this conversation progresses."


Will your driverless car be willing to kill you to save the lives of others?

The Guardian

There's a chance it could bring the mood down. Having chosen your shiny new driverless car, only one question remains on the order form: will your spangly, futuristic vehicle be willing to kill you? To buyers more accustomed to discussing models and colours, the query might sound untoward. But for manufacturers of autonomous vehicles (AVs), the dilemma it poses is real. If a driverless car is about to hit a pedestrian, should it swerve and risk killing its occupants?


Ethical dilemma on four wheels: How to decide when your self-driving car should kill you

Los Angeles Times

Self-driving cars have a lot of learning to do before they can replace the roughly 250 million vehicles on U.S. roads today. They need to know how to navigate when their pre-programmed maps are out of date. They need to know how to visualize the lane dividers on a street that's covered with snow. And, if the situation arises, they'll need to know whether it's better to mow down a group of pedestrians or spare their lives by steering off the road, killing all passengers onboard. Once self-driving cars are logging serious miles, they're sure to find themselves in situations where an accident is unavoidable.