Self-driving cars are already deciding who to kill


Autonomous vehicles are already making profound choices about whose lives matter, according to experts, so we might want to pay attention. "Every time the car makes a complex manoeuvre, it is implicitly making a trade-off in terms of risks to different parties," Iyad Rahwan, an MIT cognitive scientist, wrote in an email. The best-known issues in AV ethics are trolley problems -- moral questions dating back to the era of trolleys that ask whose lives should be sacrificed in an unavoidable crash. For instance, if a person falls onto the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do? Rahwan and colleagues have studied what humans consider the moral action in no-win scenarios (you can judge your own cases at their crowd-sourced project, Moral Machine).
