Developing more human-like responses is an increasingly prominent feature of AI


When an Uber autonomous test car killed pedestrian Elaine Herzberg in Tempe, Arizona, in March 2018, it set alarm bells ringing throughout the world of artificial intelligence (AI) and machine learning. Herzberg, walking her bicycle, had strayed on to the road, resulting in a fatal collision with the vehicle. While there were other contributory factors in the accident, the incident highlighted a key flaw in the algorithm powering the car: it had not been trained to cope with jaywalkers, nor could it recognise whether it was dealing with a bicycle or a pedestrian. Confused, it ultimately failed to default quickly to the safe option of slowing the vehicle, which could have saved Herzberg's life.
