Vision-based Navigation of an Autonomous Vehicle in Roadway Environments with Unexpected Hazards

ABSTRACT Vision-based navigation of modern autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems, in which a controller takes input from sensors/detectors such as cameras and produces an output, such as a steering wheel angle, to navigate the vehicle safely through roadway traffic. Typically, these DNN-based systems are trained through supervised and/or transfer learning; however, recent studies show that they can be compromised by perturbed or adversarial input features applied to the trained DNN models. Such perturbations can likewise be introduced into an autonomous vehicle's DNN-based system by roadway hazards such as debris and roadblocks. In this study, we first introduce a hazardous roadway environment (both intentional and unintentional) that can compromise the DNN-based system of an autonomous vehicle, producing incorrect navigational output such as a wrong steering wheel angle, which can cause crashes resulting in fatalities and injuries. We then develop an approach based on object detection and semantic segmentation that mitigates the adverse effect of this hazardous environment and helps the autonomous vehicle navigate safely around such hazards. This study finds that a DNN-based model augmented with hazardous-object detection and semantic segmentation improves an autonomous vehicle's ability to avoid potential crashes by 21% compared with a traditional DNN-based autonomous driving system.
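The control flow the abstract describes can be sketched as a simple gate: a DNN maps camera frames to steering angles, and a separate hazard-detection module overrides the raw command when debris or a roadblock appears. The functions below are illustrative stand-ins, not the study's actual models; frame values, thresholds, and the fallback behavior are all assumptions.

```python
def steering_model(frame):
    # Hypothetical stand-in for the trained DNN controller: maps a
    # camera frame (2-D list of pixel intensities in [0, 1]) to a
    # steering wheel angle in degrees.
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return (mean - 0.5) * 90.0

def detect_hazard(frame):
    # Hypothetical stand-in for the object-detection / semantic-
    # segmentation module: flags a roadway hazard (debris, roadblock)
    # when any pixel in the drivable region is near-saturated.
    return any(p > 0.9 for row in frame for p in row)

def navigate(frame, fallback_angle=0.0):
    # Gate the raw DNN output: when a hazard is detected, override the
    # steering command with a safe fallback (e.g. an avoidance planner),
    # rather than trusting a possibly perturbed DNN prediction.
    if detect_hazard(frame):
        return fallback_angle
    return steering_model(frame)
```

The point of the gate is that the segmentation module sees the hazard explicitly, so an adversarial pattern that fools the end-to-end steering network no longer reaches the actuators unchecked.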

Do you see what AI sees? Study finds that humans can think like computers


Even powerful computers, like those that guide self-driving cars, can be tricked into mistaking random scribbles for trains, fences, or school buses. It was commonly believed that people couldn't see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can. The findings suggest modern computers may not be as different from humans as we think, demonstrating how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications. "Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences.

Study finds catch-22 ethical dilemma at heart of self-driving car safety

The Guardian

In catch-22 traffic emergencies where there are only two deadly options, people generally want a self-driving vehicle to, for example, avoid a group of pedestrians and instead slam itself and its passengers into a wall, a new study says. But they would rather not be travelling in a car designed to do that. The findings of the study, released on Thursday in the journal Science, highlight just how difficult it may be for auto companies to market those cars to a public that tends to contradict itself. "People want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs," Iyad Rahwan, a co-author of the study and a professor at MIT, said.

Study finds AI systems exhibit human-like prejudices


Whether we like to believe it or not, scientific research has clearly shown that we all have deeply ingrained biases, which create stereotypes in our minds that can often lead to unfair treatment of others. As artificial intelligence (AI) plays an increasingly important role in our lives as a decision maker in self-driving cars, doctors' offices, and surveillance, it becomes critical to ask whether AI exhibits the same inbuilt biases as humans. According to a new study conducted by a team of researchers at Princeton, many AI systems do in fact exhibit racial and gender biases that could prove problematic in some cases. One well-established way for psychologists to detect biases is the Implicit Association Test. Introduced into the scientific literature in 1998 and widely used today in clinical, cognitive, and developmental research, the test is designed to measure the strength of a person's automatic association between concepts or objects in memory.
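The machine analogue of the IAT compares distances between word embeddings rather than human reaction times: a word is "associated" with whichever attribute set its vector sits closer to. A minimal sketch of that idea follows, using tiny hand-made 2-D vectors in place of real embeddings; the vectors and category names here are illustrative only, not data from the Princeton study.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors: 1.0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    # A positive score means the word leans toward A, mirroring how the
    # IAT reads faster pairings as stronger associations.
    sim_a = sum(cosine(word_vec, v) for v in attr_a) / len(attr_a)
    sim_b = sum(cosine(word_vec, v) for v in attr_b) / len(attr_b)
    return sim_a - sim_b

# Toy 2-D "embeddings" (illustrative values, not trained vectors).
pleasant = [[1.0, 0.1], [0.9, 0.2]]
unpleasant = [[0.1, 1.0], [0.2, 0.9]]
flower = [0.95, 0.15]   # expected to lean toward "pleasant"
insect = [0.15, 0.95]   # expected to lean toward "unpleasant"
```

With these toy vectors, `association(flower, pleasant, unpleasant)` comes out positive and the insect score negative, the same flower/insect pattern the embedding study reproduced from the human IAT literature.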