AI for self-driving cars doesn't account for crime - Futurity
You are free to share this article under the Attribution 4.0 International license.

Existing approaches to artificial intelligence for self-driving cars don't account for the fact that people might try to use the autonomous vehicles to do something bad, researchers report.

For example, suppose an autonomous vehicle with no passengers is about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian.

The researchers argue that the simplistic approach currently being used to address ethical considerations in AI and autonomous vehicles doesn't account for malicious intent. Most discussions of ethics in this scenario focus on whether the autonomous vehicle's AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own.

"Current approaches to ethics and autonomous vehicles are a dangerous oversimplification; moral judgment is more complex than that," says Veljko Dubljević, an assistant professor in the Science, Technology & Society (STS) program at North Carolina State University and author of a paper outlining this problem and a possible path forward. "For example, what if the five people in the car are terrorists?"
Jul-19-2020, 07:55:27 GMT