Existing approaches to artificial intelligence for self-driving cars don't account for the fact that people might try to use autonomous vehicles to do something bad, researchers report. For example, suppose an autonomous vehicle with no passengers is about to crash into a car containing five people. It can avoid the collision by swerving off the road, but it would then hit a pedestrian. Most discussions of ethics in this scenario focus on whether the autonomous vehicle's AI should be selfish (protecting the vehicle and its cargo) or utilitarian (choosing the action that harms the fewest people). But that either/or approach to ethics can raise problems of its own.
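To make the contrast concrete, the two policies can be sketched as simple decision rules. This is only an illustrative toy model; the `Outcome` class, the harm counts, and both functions are assumptions for the sake of the example, not part of any real AV system or the researchers' method.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    name: str
    harms_occupants: int  # people harmed inside the AV
    harms_others: int     # people harmed outside the AV

def selfish_choice(outcomes):
    # "Selfish" policy: minimize harm to the vehicle's own occupants,
    # ignoring harm to everyone else. Ties keep the first option listed.
    return min(outcomes, key=lambda o: o.harms_occupants)

def utilitarian_choice(outcomes):
    # "Utilitarian" policy: minimize total harm across all people.
    return min(outcomes, key=lambda o: o.harms_occupants + o.harms_others)

# The article's scenario: an empty AV can stay on course (harming the five
# people in the other car) or swerve off the road (harming one pedestrian).
stay = Outcome("stay", harms_occupants=0, harms_others=5)
swerve = Outcome("swerve", harms_occupants=0, harms_others=1)

print(selfish_choice([stay, swerve]).name)      # → stay (occupant harm ties, so outside harm is ignored)
print(utilitarian_choice([stay, swerve]).name)  # → swerve (1 harmed < 5 harmed)
```

The toy model already exposes the article's worry: each rule is a fixed scoring function, so anyone who knows which rule the vehicle uses can stage situations that exploit it.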
Jul-11-2020, 10:20:52 GMT