Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards

arXiv.org Artificial Intelligence

Vision-based navigation of modern autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems in which the controller takes input from sensors such as cameras and produces an output, such as a steering wheel angle, to navigate the vehicle safely in roadway traffic. Typically, these DNN-based systems are trained through supervised and/or transfer learning; however, recent studies show that such systems can be compromised by perturbations or adversarial input features applied to the trained DNN-based models. Similarly, such perturbations can be introduced into an autonomous vehicle's DNN-based system by roadway hazards such as debris and roadblocks. In this study, we first introduce hazardous roadway environments (both intentional and unintentional) that can compromise the DNN-based system of an autonomous vehicle, producing an incorrect navigational output, such as a steering wheel angle, that can cause crashes resulting in fatalities and injuries. We then develop an approach based on object detection and semantic segmentation that mitigates the adverse effect of these hazardous environments and helps the autonomous vehicle navigate safely around such hazards. This study finds that augmenting the DNN-based model with hazardous-object detection and semantic segmentation improves the ability of an autonomous vehicle to avoid potential crashes by 21% compared to a traditional DNN-based autonomous driving system.
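The abstract describes an end-to-end DNN controller that maps camera frames to a steering wheel angle and is augmented with object detection and semantic segmentation to handle roadway hazards. Below is a minimal, hypothetical PyTorch sketch of such a two-headed architecture; the layer sizes, class names, and heads are illustrative assumptions for demonstration only, not the paper's actual model.

```python
# Illustrative sketch (not the paper's model): a PilotNet-style steering
# network extended with a coarse hazard-segmentation head.
import torch
import torch.nn as nn

class SteeringWithHazardSegmentation(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Shared convolutional encoder over the camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
        )
        # Steering head: regresses a single steering-wheel angle.
        self.steering_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(48, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
        # Segmentation head: per-pixel scores, e.g. road / hazard / background.
        self.seg_head = nn.Conv2d(48, num_classes, kernel_size=1)

    def forward(self, frame: torch.Tensor):
        features = self.encoder(frame)
        steering = self.steering_head(features)   # (B, 1) steering angle
        seg_logits = self.seg_head(features)      # (B, C, h, w) coarse hazard mask
        return steering, seg_logits

# Example: one 66x200 RGB frame (an input size commonly used for this model style).
model = SteeringWithHazardSegmentation()
angle, mask = model(torch.randn(1, 3, 66, 200))
```

In this sketch the segmentation output would let a downstream planner veto or adjust the regressed steering angle when a hazard class dominates the mask, which is one plausible way to realize the mitigation the abstract describes.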


Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

arXiv.org Machine Learning

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples: carefully crafted inputs with small perturbations that induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physically adversarial Stop Sign" can be synthesized such that autonomous driving cars misrecognize it as another sign (e.g., a speed limit sign). However, these image-based adversarial examples cannot easily alter the 3D scans produced by the LiDAR or radar sensors widely equipped on autonomous vehicles. In this paper, we reveal the potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, LiDAR-Adv, to generate real-world adversarial objects that can evade LiDAR-based detection systems under various conditions. We first explore the vulnerabilities of LiDAR using an evolution-based blackbox attack algorithm, and then propose a stronger attack strategy using our gradient-based approach, LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We 3D-print our adversarial objects and perform physical experiments with LiDAR-equipped cars to illustrate the effectiveness of LiDAR-Adv. Please find more visualizations and physical experimental results on this website: https://sites.google.com/view/lidar-adv.
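LiDAR-Adv is described as a gradient-based, optimization-driven way to craft objects that evade LiDAR detection. The following is a generic, hedged sketch of that style of attack on a raw point cloud; the `detector` callable, the loss, the step count, and the displacement bound are placeholders, and it does not reproduce the paper's mesh rendering or the Baidu Apollo pipeline.

```python
# Generic PGD-style attack on a LiDAR point cloud, in the spirit of LiDAR-Adv.
# The detector and all hyperparameters below are illustrative assumptions.
import torch

def attack_point_cloud(points, detector, steps=200, lr=0.01, max_shift=0.05):
    """Nudge point coordinates to lower the detector's object confidence.

    points:    (N, 3) tensor of LiDAR points belonging to the target object.
    detector:  callable mapping an (N, 3) point cloud to a scalar objectness score.
    max_shift: per-point displacement bound (metres), keeping the shape plausible.
    """
    delta = torch.zeros_like(points, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        score = detector(points + delta)   # higher score = more detectable
        loss = score                       # minimise detection confidence
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():              # project back into the allowed bound
            delta.clamp_(-max_shift, max_shift)
    return (points + delta).detach()
```

The key difference from image-space attacks, as the abstract notes, is that a physical object must be perturbed in 3D geometry rather than in pixel values, which is why the optimization runs over point (or mesh) coordinates under a physical plausibility constraint.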



5 reasons why driver-less cars will not be a reality in India - Analytics Jobs

#artificialintelligence

A recent study shows that Indians want driverless cars more than anyone else in the world. However, are driverless cars going to become a reality in India? Last year, in December 2018, Union Transport Minister Mr. Nitin Gadkari announced that the Government will not entertain driverless car operations in India as they could result in employment losses for the masses. That could be one of the reasons. However, is India ready for driverless cars now?


Do you see what AI sees? Study finds that humans can think like computers

#artificialintelligence

Even powerful computers, like those that guide self-driving cars, can be tricked into mistaking random scribbles for trains, fences, or school buses. It was commonly believed that people couldn't see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can. The findings suggest modern computers may not be as different from humans as we think, demonstrating how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications. "Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences.