Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning

arXiv.org Machine Learning

This study proposes a framework for human-like autonomous car-following planning based on deep reinforcement learning (deep RL). Historical driving data are fed into a simulation environment in which an RL agent learns from trial-and-error interactions, guided by a reward function that signals how much the agent deviates from the empirical data. Through these interactions, the agent obtains an optimal policy: a car-following model that maps, in a human-like way, from the following vehicle's speed, the relative speed between the lead and following vehicles, and the inter-vehicle spacing to the following vehicle's acceleration. The model can be continuously updated as more data are fed in. Two thousand car-following periods extracted from the 2015 Shanghai Naturalistic Driving Study were used to train the model and to compare its performance with that of traditional and recent data-driven car-following models. The results show that a deep deterministic policy gradient car-following model that uses the disparity between simulated and observed speed as the reward function and considers a reaction delay of 1 s, denoted DDPGvRT, reproduces human-like car-following behavior more accurately than traditional and recent data-driven models. Specifically, the DDPGvRT model has a spacing validation error of 18% and a speed validation error of 5%, both lower than those of other models, including the intelligent driver model, models based on locally weighted regression, and conventional neural-network-based models. Moreover, DDPGvRT generalizes well to various driving situations and can adapt to different drivers through continuous learning. This study demonstrates that reinforcement learning methodology can offer insight into driver behavior and can contribute to the development of human-like autonomous driving algorithms and traffic-flow models.
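To make the setup concrete, here is a minimal Python sketch of the kind of simulation environment the abstract describes: the state is (speed, relative speed, spacing), the action is the follower's acceleration applied after a 1 s reaction delay, and the reward penalizes the disparity between simulated and observed follower speed, as in DDPGvRT. All names and the simplified kinematics are illustrative assumptions, not the authors' code.

```python
import numpy as np

class CarFollowingEnv:
    """Hypothetical car-following simulator sketched from the abstract.

    State:  (follower speed v, relative speed dv = v_lead - v, spacing s)
    Action: follower acceleration, taking effect after a 1 s reaction delay
    Reward: negative absolute disparity between simulated and observed speed
    """

    def __init__(self, lead_speeds, obs_speeds, obs_spacing0, dt=0.1, delay=1.0):
        self.lead_speeds = np.asarray(lead_speeds)  # empirical lead-vehicle speeds
        self.obs_speeds = np.asarray(obs_speeds)    # empirical follower speeds
        self.spacing0 = obs_spacing0                # observed initial spacing
        self.dt = dt
        self.delay_steps = int(round(delay / dt))   # 1 s reaction delay in steps

    def reset(self):
        self.t = 0
        self.v = self.obs_speeds[0]
        self.s = self.spacing0
        # queue of pending actions models the reaction delay
        self.pending = [0.0] * self.delay_steps
        return self._state()

    def _state(self):
        dv = self.lead_speeds[self.t] - self.v
        return np.array([self.v, dv, self.s])

    def step(self, accel):
        self.pending.append(float(accel))
        a = self.pending.pop(0)                     # delayed action takes effect
        v_lead = self.lead_speeds[self.t]
        self.s += (v_lead - self.v) * self.dt       # update inter-vehicle spacing
        self.v = max(0.0, self.v + a * self.dt)     # update follower speed
        self.t += 1
        # reward: how closely the simulated speed tracks the observed speed
        reward = -abs(self.v - self.obs_speeds[self.t])
        done = self.t >= len(self.obs_speeds) - 1
        return self._state(), reward, done
```

A DDPG agent would then be trained on episodes drawn from the 2,000 empirical car-following periods, with the actor mapping the three-dimensional state to a continuous acceleration.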


Artificial intelligence virtual consultant helps deliver better patient care

#artificialintelligence

WASHINGTON, DC (March 8, 2017)--Interventional radiologists at the University of California, Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting. The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to give the patient real-time information about the next phase of treatment, or basic information about an interventional radiology procedure. "We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."
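As a rough illustration of how such a chatbot might match referring clinicians' questions to curated, evidence-based answers, here is a minimal retrieval sketch in Python. The FAQ content, similarity threshold, and TF-IDF approach are all illustrative assumptions; the study's actual system is not described in enough detail here to reproduce.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative FAQ pairs; the study's real knowledge base is not public here.
faq = {
    "How should patients prepare for an IVC filter placement?":
        "Patients typically fast beforehand; anticoagulation plans are reviewed.",
    "What follow-up is needed after uterine fibroid embolization?":
        "A clinic visit within a few months and a symptom review are common.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vecs = vectorizer.transform(questions)

def answer(query, min_sim=0.2):
    """Return the best-matching FAQ answer, or defer to a human."""
    sims = cosine_similarity(vectorizer.transform([query]), question_vecs)[0]
    best = sims.argmax()
    if sims[best] < min_sim:
        return "Forwarding your question to the interventional radiology team."
    return faq[questions[best]]

print(answer("What prep is required before IVC filter placement?"))
```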


Vision-based Navigation of Autonomous Vehicle in Roadway Environments with Unexpected Hazards

arXiv.org Artificial Intelligence

ABSTRACT Vision-based navigation in modern autonomous vehicles primarily depends on Deep Neural Network (DNN) based systems, in which the controller obtains input from sensors/detectors such as cameras and produces an output, such as a steering wheel angle, to navigate the vehicle safely in roadway traffic. Typically, these DNN-based systems are trained through supervised and/or transfer learning; however, recent studies show that they can be compromised by perturbations or adversarial input features applied to the trained DNN-based models. Such perturbations can also be introduced into an autonomous vehicle's DNN-based system by roadway hazards such as debris and roadblocks. In this study, we first introduce a hazardous roadway environment (both intentional and unintentional) that can compromise the DNN-based system of an autonomous vehicle, producing an incorrect navigational output, such as a wrong steering wheel angle, which can cause crashes resulting in fatalities and injuries. Then, we develop an approach based on object detection and semantic segmentation to mitigate the adverse effect of this hazardous environment and help the autonomous vehicle navigate safely around such hazards. This study finds that augmenting the DNN-based model with hazardous-object detection and semantic segmentation improves the ability of an autonomous vehicle to avoid potential crashes by 21% compared with a traditional DNN-based autonomous driving system.
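The mitigation idea lends itself to a small illustration. Below is a hypothetical Python sketch of how a semantic-segmentation output could gate the steering command of an end-to-end driving DNN: if segmented hazard pixels occupy the vehicle's projected corridor, the raw steering output is adjusted away from the hazard. The class ids, threshold, and corridor geometry are assumptions for illustration; the paper's actual pipeline is not reproduced here.

```python
import numpy as np

def safeguard_steering(seg_mask, steering_angle, hazard_ids=(7, 8), thresh=0.02):
    """Adjust a DNN steering output when segmented hazards block the path.

    seg_mask:       (H, W) int array of per-pixel class ids from a semantic
                    segmentation model (hazard class ids are placeholders).
    steering_angle: raw output of the end-to-end driving DNN, in radians.
    """
    h, w = seg_mask.shape
    # crude projected corridor: lower-center strip, shifted with the steering angle
    shift = int(np.clip(steering_angle, -0.5, 0.5) * w / 3)
    corridor = seg_mask[h // 2:, w // 3 + shift: 2 * w // 3 + shift]
    hazard_frac = np.isin(corridor, hazard_ids).mean()
    if hazard_frac > thresh:
        # hazard blocks the path: nudge toward the clearer half of the image
        left = np.isin(seg_mask[h // 2:, : w // 2], hazard_ids).mean()
        right = np.isin(seg_mask[h // 2:, w // 2:], hazard_ids).mean()
        return steering_angle + (0.1 if left > right else -0.1)
    return steering_angle
```

In a deployed system the corridor would come from calibrated camera geometry and the adjustment from a planner, but the sketch shows the gating principle: the segmentation acts as a safety filter over the end-to-end controller's output.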


Researchers use iPad to hail driverless taxi - Roadshow

AITopics Original Links

Hot on the heels of Google's robot cars, a team of German researchers at AutoNOMOS Labs at the Free University of Berlin has upped the ante and unveiled a driverless taxi. Imagine never arguing about the most efficient route or mentally debating the merits of tipping a driver whose ineptitude at the wheel almost killed you. Made in Germany (MIG) is an autonomous Volkswagen Passat cab you hail using an iPad app, and it eliminates the most unappealing aspect of taxis: the driver. MIG is equipped with GPS navigation, video cameras, laser scanners, sensors, and radar, which it uses to construct a 3D map of its surroundings. It uses this map to detect pedestrians and other vehicles as it navigates the road.


Do you see what AI sees? Study finds that humans can think like computers

#artificialintelligence

Even powerful computers, like those that guide self-driving cars, can be tricked into mistaking random scribbles for trains, fences, or school buses. It was commonly believed that people couldn't see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can. The findings suggest modern computers may not be as different from humans as we think, demonstrating how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications. "Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences.