Vision-based Navigation of Autonomous Vehicles in Roadway Environments with Unexpected Hazards (Artificial Intelligence)

ABSTRACT: Vision-based navigation of modern autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems in which the controller obtains input from sensors/detectors, such as cameras, and produces an output, such as a steering wheel angle, to navigate the vehicle safely in roadway traffic. Typically, these DNN-based systems are trained through supervised and/or transfer learning; however, recent studies show that they can be compromised by perturbed or adversarial input features. Such perturbations can also be introduced into an autonomous vehicle's DNN-based system by roadway hazards such as debris and roadblocks. In this study, we first introduce a hazardous roadway environment (both intentional and unintentional) that can compromise the DNN-based system of an autonomous vehicle, producing incorrect navigational output, such as a steering wheel angle, which can cause crashes resulting in fatalities and injuries. We then develop an approach based on object detection and semantic segmentation that mitigates the adverse effects of this hazardous environment and helps the autonomous vehicle navigate safely around such hazards. This study finds that augmenting the DNN-based model with hazardous object detection and semantic segmentation improves the ability of an autonomous vehicle to avoid potential crashes by 21% compared to a traditional DNN-based autonomous driving system.
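The abstract describes overriding an end-to-end steering prediction when detection and segmentation flag a hazard in the planned path. The sketch below is purely illustrative and is not the paper's architecture: the DNN and the detector are mocked as stand-in functions, and all names and thresholds (`hazard_in_path`, `width_deg`, the 45-degree limit) are assumptions.

```python
# Hypothetical sketch of hazard-aware steering: a base DNN angle is
# overridden when a (mocked) detection/segmentation stage reports an
# obstacle near the planned heading.

def base_steering(frame):
    """Stand-in for the end-to-end DNN mapping a camera frame to a
    steering-wheel angle in degrees (here just echoes a stored value)."""
    return frame["dnn_angle"]

def hazard_in_path(frame, angle, width_deg=15.0):
    """Stand-in for object detection + semantic segmentation: True if any
    detected hazard lies within +/- width_deg of the candidate heading."""
    return any(abs(h - angle) <= width_deg for h in frame["hazard_angles"])

def safe_steering(frame, step=5.0, max_angle=45.0):
    """Start from the DNN's angle; if a hazard blocks that heading,
    search outward for the nearest clear heading."""
    angle = base_steering(frame)
    if not hazard_in_path(frame, angle):
        return angle
    offset = step
    while offset <= max_angle:
        for cand in (angle + offset, angle - offset):
            if abs(cand) <= max_angle and not hazard_in_path(frame, cand):
                return cand
        offset += step
    return 0.0  # no clear heading found: default to straight / hand off

# Example: the DNN says steer straight, but debris sits 3 degrees ahead.
frame = {"dnn_angle": 0.0, "hazard_angles": [3.0]}
print(safe_steering(frame))  # → -15.0 (steers clear of the debris)
```

The key design point the abstract implies is the layering: the segmentation stage acts as a safety filter on top of the learned controller rather than replacing it.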

Deep Q-Learning for Same-Day Delivery with a Heterogeneous Fleet of Vehicles and Drones (Machine Learning)

In this paper, we consider same-day delivery with a heterogeneous fleet of vehicles and drones. Customers make delivery requests over the course of the day, and the dispatcher dynamically dispatches vehicles and drones to deliver the goods to customers before their delivery deadlines. Vehicles can deliver multiple packages in one route but travel relatively slowly due to urban traffic. Drones travel faster, but they have limited capacity and require charging or battery swaps. To exploit the complementary strengths of the two fleets, we propose a deep Q-learning approach. Our method learns the value of assigning a new customer to either drones or vehicles, as well as the option of not offering service at all. To aid feature selection, we present an analysis that demonstrates the role different types of information play in the value function and in decision making. In a systematic computational analysis, we show the superiority of our policy compared to benchmark policies and the effectiveness of our deep Q-learning approach.
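The paper learns the value of assigning each request to a drone, a vehicle, or no service at all. The toy sketch below is not the authors' method: it replaces the deep Q-network with a tiny tabular one-step value estimator, and the reward model (drones fail on far or heavy requests) is an invented assumption used only to show the shape of the learning problem.

```python
import random

ACTIONS = ("vehicle", "drone", "no_service")

def simulate_reward(state, action):
    # Toy reward model (an assumption, not the paper's simulator):
    # drones are fast but unsuitable for far/heavy requests; vehicles
    # are slower but reliable; rejecting a request yields nothing.
    far, heavy = state
    if action == "no_service":
        return 0.0
    if action == "drone":
        return -1.0 if (far or heavy) else 2.0
    return 1.0  # vehicle

def train_q(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = (rng.random() < 0.5, rng.random() < 0.3)  # (far, heavy)
        if rng.random() < eps:                 # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        r = simulate_reward(state, action)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (r - old)  # one-step update
    return q

q = train_q()
best_near_light = max(ACTIONS, key=lambda a: q.get(((False, False), a), 0.0))
print(best_near_light)  # → "drone": near, light requests suit drones
```

In the paper's setting the state would be a feature vector (request location, time, fleet status) fed to a neural network; the assignment logic, however, has this same structure: pick the action with the highest learned value.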

Study finds AI systems exhibit human-like prejudices


Whether we like to believe it or not, scientific research has clearly shown that we all have deeply ingrained biases, which create stereotypes in our minds that can often lead to unfair treatment of others. As artificial intelligence (AI) plays an increasingly important role in our lives as a decision maker in self-driving cars, doctors' offices, and surveillance, it becomes critical to ask whether AI exhibits the same inbuilt biases as humans. According to a new study conducted by a team of researchers at Princeton, many AI systems do in fact exhibit racial and gender biases that could prove problematic in some cases. One well-established way for psychologists to detect biases is the Implicit Association Test. Introduced into the scientific literature in 1998 and widely used today in clinical, cognitive, and developmental research, the test is designed to measure the strength of a person's automatic association between concepts or objects in memory.
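The Princeton study adapted the Implicit Association Test's idea of association strength to word embeddings, scoring how much closer a word sits to one attribute set than another. The sketch below shows that association score on hand-made 2-D "embeddings"; the vectors are purely illustrative assumptions, not real word vectors, and this is a simplification of the study's full statistical test.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, attr_a, attr_b):
    """Mean similarity of word w to attribute set A minus its mean
    similarity to attribute set B: positive means w leans toward A."""
    mean_a = sum(cosine(w, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(w, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

# Toy 2-D "embeddings" (illustrative only):
pleasant   = [(1.0, 0.1), (0.9, 0.2)]
unpleasant = [(0.1, 1.0), (0.2, 0.9)]
flower = (0.95, 0.15)   # lies near the "pleasant" direction
insect = (0.15, 0.95)   # lies near the "unpleasant" direction

print(association(flower, pleasant, unpleasant) > 0)  # → True
print(association(insect, pleasant, unpleasant) < 0)  # → True
```

Run on real embeddings trained from web text, the same score reproduces human IAT patterns, which is how the study detected the biases the article describes.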

A.I. camera could help self-driving cars 'see' better - Futurity


Researchers have devised a new type of artificially intelligent camera system that can classify images faster and more energy-efficiently. The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street, or a stopped car. The new camera could one day be small enough to fit in future electronic devices, something that is not possible today because of the size and slow speed of the computers that can run artificial intelligence algorithms. "That autonomous car you just passed has a relatively huge, relatively slow, energy-intensive computer in its trunk," says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University who led the research.

Duke researchers use machine learning to defend personal information


Two Duke researchers have found a way to confuse machine learning systems, potentially revealing a new way to protect online privacy. Neil Gong, assistant professor of electrical and computer engineering, and Jinyuan Jia, a Ph.D. candidate in electrical and computer engineering, have demonstrated the potential for so-called "adversarial examples," or deliberately altered data, to confuse machine learning systems. This research could be used to fool attackers who use these systems to analyze user data. "We found that, since attackers are using machine learning to perform automated large-scale inference attacks, and machine learning is vulnerable to those adversarial examples, we can leverage those adversarial examples to protect our privacy," Gong said. Machine learning systems are tools for statistical analysis.
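The idea of turning adversarial examples into a defense can be sketched with a toy inference attacker. This is not the Duke authors' actual system; the attacker here is a simple linear classifier, and the defense is a gradient-sign step (for a linear score, the gradient with respect to the input is just the weight vector). All weights and feature values are invented for illustration.

```python
# Illustrative sketch: an "attacker" uses a linear classifier to infer a
# private attribute from user features; the defender adds a small
# adversarial perturbation that flips the attacker's inference.

def predict(w, b, x):
    """Attacker's linear classifier: 1 = infers the private attribute."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial_perturb(w, x, eps):
    # For a linear score w.x + b, the gradient w.r.t. x is w, so moving
    # each feature by -eps * sign(w_i) lowers the score the fastest.
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.8, -0.3, 0.5], -0.2   # attacker's (hypothetical) model
x = [1.0, 0.2, 0.6]             # raw user data
x_adv = adversarial_perturb(w, x, eps=0.7)

print(predict(w, b, x))      # → 1: inference succeeds on raw data
print(predict(w, b, x_adv))  # → 0: perturbed data fools the attacker
```

The defender's advantage is that the perturbation can be kept small enough to preserve the data's usefulness while still pushing it across the attacker's decision boundary.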