Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards

arXiv.org Artificial Intelligence

Vision-based navigation of modern autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems, in which the controller obtains input from sensors/detectors such as cameras and produces an output such as a steering wheel angle to navigate the vehicle safely in roadway traffic. Typically, these DNN-based systems are trained through supervised and/or transfer learning; however, recent studies show that they can be compromised by perturbations or adversarial input features applied to the trained DNN-based models. Similarly, such perturbations can be introduced into an autonomous vehicle's DNN-based system by roadway hazards such as debris and roadblocks. In this study, we first introduce a roadway hazardous environment (both intentional and unintentional) that can compromise the DNN-based system of an autonomous vehicle, producing an incorrect navigational output such as a steering wheel angle that can cause crashes resulting in fatality and injury. We then develop an approach based on object detection and semantic segmentation to mitigate the adverse effect of this hazardous environment, helping the autonomous vehicle navigate safely around such hazards. This study finds that a DNN-based model augmented with hazardous-object detection and semantic segmentation improves the ability of an autonomous vehicle to avoid potential crashes by 21% compared to a traditional DNN-based autonomous driving system.
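For illustration only, the sketch below shows one way detection and segmentation outputs might be combined with a base steering DNN, assuming PyTorch-style modules. The class and method names (HazardAwareController, plan_around) and the three submodules are placeholders, not the paper's implementation.

```python
# Hypothetical sketch of a hazard-aware steering pipeline (not the paper's code).
import torch
import torch.nn as nn

class HazardAwareController(nn.Module):
    """Combines a base steering DNN with hazard detection and
    semantic segmentation cues, as described in the abstract."""
    def __init__(self, steering_net: nn.Module,
                 hazard_detector: nn.Module,
                 segmentation_net: nn.Module):
        super().__init__()
        self.steering_net = steering_net          # camera frame -> raw steering angle
        self.hazard_detector = hazard_detector    # camera frame -> hazard boxes/scores
        self.segmentation_net = segmentation_net  # camera frame -> per-pixel class map

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        angle = self.steering_net(frame)          # baseline steering prediction
        hazards = self.hazard_detector(frame)     # e.g. debris, roadblocks
        drivable = self.segmentation_net(frame)   # drivable-space mask
        # Bias the angle away from detected hazards toward drivable space.
        correction = self.plan_around(angle, hazards, drivable)
        return angle + correction

    def plan_around(self, angle, hazards, drivable):
        # Placeholder: the abstract does not specify the mitigation rule,
        # so this stub applies no correction by default.
        return torch.zeros_like(angle)
```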


Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

arXiv.org Machine Learning

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples: carefully crafted inputs with small perturbations that induce arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physically adversarial Stop Sign" can be synthesized such that autonomous driving cars misrecognize it as another sign (e.g., a speed limit sign). However, these image-based adversarial examples cannot easily alter the 3D scans produced by the LiDAR or radar sensors widely equipped on autonomous vehicles. In this paper, we reveal potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, LiDAR-Adv, to generate real-world adversarial objects that can evade LiDAR-based detection under various conditions. We first explore the vulnerabilities of LiDAR using an evolution-based blackbox attack algorithm, and then propose a stronger attack strategy using our gradient-based approach, LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We 3D-print our adversarial objects and perform physical experiments with LiDAR-equipped cars to illustrate the effectiveness of LiDAR-Adv. More visualizations and physical experimental results are available at https://sites.google.com/view/lidar-adv.
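As a rough illustration of the gradient-based attack described above, the sketch below optimizes a small offset on an object's mesh vertices to lower a detector's confidence on a simulated LiDAR scan. The functions render_to_pointcloud and detector are assumed differentiable stand-ins; this is a generic sketch, not the LiDAR-Adv implementation.

```python
# Generic gradient-based adversarial shape optimization (illustrative only).
import torch

def optimize_adversarial_mesh(vertices: torch.Tensor,
                              render_to_pointcloud,   # mesh -> simulated LiDAR scan
                              detector,               # scan -> detection confidence
                              steps: int = 200,
                              lr: float = 0.01,
                              max_offset: float = 0.05):
    """Perturb mesh vertices so the simulated LiDAR scan yields a low
    detection confidence, while keeping the shape change small."""
    offset = torch.zeros_like(vertices, requires_grad=True)
    opt = torch.optim.Adam([offset], lr=lr)
    for _ in range(steps):
        scan = render_to_pointcloud(vertices + offset)   # differentiable simulation
        confidence = detector(scan)                      # probability object is detected
        loss = confidence + 10.0 * offset.norm()         # evade detection, stay subtle
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                            # keep perturbation bounded
            offset.clamp_(-max_offset, max_offset)
    return (vertices + offset).detach()
```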



5 reasons why driver-less cars will not be a reality in India - Analytics Jobs

#artificialintelligence

A recent study shows that Indians want driverless cars more than anyone else in the world. But are driverless cars going to be a reality in India? In December 2018, Union Transport Minister Mr. Nitin Gadkari announced that the Government will not entertain driverless car operations in India, as they could result in employment losses for the masses. That could be one of the reasons. But is India ready for driverless cars now?


Responding to Challenges in the Design of Moral Autonomous Vehicles

AAAI Conferences

One major example of promising ‘smart’ technology in the public sector is the autonomous vehicle (AV). AVs are expected to yield numerous social benefits, such as increasing traffic efficiency, decreasing pollution, and decreasing traffic accidents by 90%. However, a 2016 study by Bonnefon et al. argued that manufacturers and regulators face a major design challenge of balancing competing public preferences: a moral preference for “utilitarian” algorithms; a consumer preference for vehicles that prioritize passenger safety; and a policy preference for minimum government regulation of vehicle algorithm design. Our paper responds to that study, calling into question the importance of explicitly moral algorithms and the seriousness of the challenge identified by Bonnefon et al. We conclude that the ‘social dilemma’ is probably overstated. Given that attempts to resolve the ‘social dilemma’ are likely to delay the rollout of socially beneficial AVs, we underscore the need for further research validating Bonnefon et al.’s conclusions and encourage manufacturers and regulators to commercialize AVs as soon as possible. We also discuss the implications of this example for the larger context of Cognitive Assistance in other application areas and for the government and public policies under discussion.