Risk-Based Filtering of Valuable Driving Situations in the Waymo Open Motion Dataset

Puphal, Tim, Ramtekkar, Vipul, Nishimiya, Kenji

arXiv.org Artificial Intelligence

Improving automated vehicle software requires driving data rich in valuable road user interactions. In this paper, we propose a risk-based filtering approach that helps identify such valuable driving situations from large datasets. Specifically, we use a probabilistic risk model to detect high-risk situations. Our method stands out by considering a) first-order situations (where one vehicle directly influences another and induces risk) and b) second-order situations (where influence propagates through an intermediary vehicle). In experiments, we show that our approach effectively selects valuable driving situations in the Waymo Open Motion Dataset. Compared to the two baseline interaction metrics of Kalman difficulty and Tracks-To-Predict (TTP), our filtering approach identifies complex and complementary situations, enriching the quality of automated vehicle testing. The risk data is made open-source: https://github.com/HRI-EU/RiskBasedFiltering.
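To illustrate the first-order versus second-order distinction, the sketch below flags agent pairs whose surrogate risk (closing speed divided by gap, a toy stand-in for the paper's probabilistic risk model) exceeds a threshold, then derives second-order pairs whose influence is linked only through an intermediary agent. The agent states, the risk surrogate, and the threshold are all invented for illustration.

```python
from itertools import permutations

# Toy agent states: (x, y, vx, vy); names and values are illustrative.
agents = {
    "A": (0.0, 0.0, 10.0, 0.0),    # approaching B head-on
    "B": (20.0, 0.0, 0.0, 8.0),    # moving toward C
    "C": (20.0, 25.0, 0.0, 0.0),   # stationary
}

def risk(s1, s2):
    """Surrogate pairwise risk: closing speed over gap (not the paper's model)."""
    dx, dy = s2[0] - s1[0], s2[1] - s1[1]
    dist = max((dx**2 + dy**2) ** 0.5, 1e-6)
    closing = ((s1[2] - s2[2]) * dx + (s1[3] - s2[3]) * dy) / dist
    return max(closing, 0.0) / dist

THRESH = 0.25
# First-order: one agent directly induces risk on another.
first_order = {(i, j) for i, j in permutations(agents, 2)
               if risk(agents[i], agents[j]) > THRESH}
# Second-order: i influences k and k influences j, with no direct i->j link.
second_order = {(i, j) for i, j in permutations(agents, 2)
                if (i, j) not in first_order
                and any((i, k) in first_order and (k, j) in first_order
                        for k in agents if k not in (i, j))}
```

Here A and B interact directly, B and C interact directly, and the A-C pair surfaces only as a second-order situation mediated by B.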


Markov Switching Model for Driver Behavior Prediction: Use cases on Smartphones

Zaky, Ahmed B., Khamis, Mohamed A., Gomaa, Walid

arXiv.org Artificial Intelligence

Several intelligent transportation systems focus on studying various driver behaviors for numerous objectives, including the ability to analyze driver actions, sensitivity, distraction, and response time. Since data collection is one of the major concerns for learning and validating different driving situations, we present a driver behavior switching model validated by a low-cost data collection solution using smartphones. The proposed model is validated on a real dataset to predict driver behavior over short time periods. A literature survey on motion detection (specifically, driving behavior detection using smartphones) is presented. Multiple Markov Switching Variable Auto-Regression (MSVAR) models are implemented to achieve a sophisticated fit to the collected driver behavior data. This yields more accurate predictions not only of driver behavior but also of the entire driving situation. The performance of the presented models, together with a suitable model selection criterion, is also presented. The proposed driver behavior prediction framework can potentially be used in accident prediction and driver safety systems.
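The core idea behind Markov switching models can be sketched with a minimal two-regime simulator: a hidden Markov chain picks the current driving regime, and each regime has its own autoregressive dynamics. This is a deliberately reduced stand-in for MSVAR (which fits vector autoregressions per regime and estimates the parameters from data); the regime parameters and transition probabilities below are invented.

```python
import random

random.seed(0)

# Two illustrative driving regimes with AR(1) dynamics per regime:
# (ar_coefficient, mean, noise_std). Values are made up for the sketch.
PARAMS = {0: (0.9, 0.0, 0.1),   # "calm" regime
          1: (0.5, 2.0, 0.5)}   # "aggressive" regime
TRANS = {0: 0.95, 1: 0.85}      # probability of staying in the current regime

def simulate(n_steps):
    """Simulate a regime-switching AR(1) signal; returns (regime, value) pairs."""
    state, y, path = 0, 0.0, []
    for _ in range(n_steps):
        if random.random() > TRANS[state]:   # Markov regime switch
            state = 1 - state
        a, mu, sigma = PARAMS[state]
        y = mu + a * (y - mu) + random.gauss(0.0, sigma)
        path.append((state, y))
    return path
```

Fitting the reverse direction (inferring regimes and per-regime coefficients from an observed signal) is what the MSVAR estimation in the paper addresses.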


Machine Learning Based Prediction of Future Stress Events in a Driving Scenario

Clark, Joseph, Nath, Rajdeep Kumar, Thapliyal, Himanshu

arXiv.org Artificial Intelligence

This paper presents a model for predicting a driver's stress level up to one minute in advance. Successfully predicting future stress would allow stress mitigation to begin before the subject becomes stressed, reducing or possibly avoiding the performance penalties of stress. The proposed model takes features extracted from Galvanic Skin Response (GSR) signals on the foot and hand, and from Respiration and Electrocardiogram (ECG) signals on the chest of the driver. The data used to train the model was retrieved from an existing database and then processed to create statistical and frequency features. A total of 42 features were extracted from the data and then expanded to 252 features by grouping the data and taking six statistical measurements of each group for each feature. A Random Forest Classifier was trained and evaluated using a leave-one-subject-out testing approach. The model achieved 94% average accuracy on the test data. Results indicate that the model performs well and could be used as part of a vehicle stress prevention system.
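The 42-to-252 feature expansion follows from taking six summary statistics of each grouped base feature (42 × 6 = 252). The sketch below shows one plausible version of this step; the feature names and the particular six statistics are assumptions, not taken from the paper.

```python
import statistics

def expand_features(windows):
    """Expand each windowed base feature into six summary statistics.

    `windows` is a list of dicts mapping feature name -> list of samples.
    Six statistics per base feature turn 42 base features into 252.
    (The exact six statistics used in the paper are not specified here;
    these are illustrative choices.)
    """
    stats = {
        "mean": statistics.mean,
        "std": statistics.pstdev,
        "min": min,
        "max": max,
        "median": statistics.median,
        "range": lambda xs: max(xs) - min(xs),
    }
    rows = []
    for window in windows:
        row = {f"{name}_{stat_name}": fn(values)
               for name, values in window.items()
               for stat_name, fn in stats.items()}
        rows.append(row)
    return rows
```

The resulting rows would then feed a Random Forest classifier evaluated with a leave-one-subject-out split, so that each test subject's data never appears in training.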


10 Creative Safety Features For Driverless Car One Must Know

#artificialintelligence

For a few years now, research and development in driverless cars has evolved tremendously, and several intuitive and creative changes have been witnessed. There have been various demonstrations of these autonomous vehicles by the auto giants, both on and off roads. However, safety has always been a serious concern in self-driving cars. There have been severe accidents during on-road trials of various driverless cars. For instance, in 2016, a Tesla driver died in a fatal crash while using autopilot mode.


Driving Style Encoder: Situational Reward Adaptation for General-Purpose Planning in Automated Driving

Rosbach, Sascha, James, Vinit, Großjohann, Simon, Homoceanu, Silviu, Li, Xing, Roth, Stefan

arXiv.org Artificial Intelligence

General-purpose planning algorithms for automated driving combine mission, behavior, and local motion planning. Such planning algorithms map features of the environment and driving kinematics into complex reward functions. To achieve this, planning experts often rely on linear reward functions. The specification and tuning of these reward functions is a tedious process and requires significant experience. Moreover, a manually designed linear reward function does not generalize across different driving situations. In this work, we propose a deep learning approach based on inverse reinforcement learning that generates situation-dependent reward functions. Our neural network provides a mapping between features and actions of sampled driving policies of a model-predictive control-based planner and predicts reward functions for upcoming planning cycles. In our evaluation, we compare the driving style of reward functions predicted by our deep network against clustered and linear reward functions. Our proposed deep learning approach outperforms clustered linear reward functions and is on par with linear reward functions with a-priori knowledge about the situation.
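The contrast between a fixed linear reward and a situation-dependent one can be made concrete with a small sketch. The feature names, weight values, and situation labels below are invented; the `situational_weights` function is a hand-written stand-in for the learned network the paper proposes.

```python
# A linear reward scores a candidate trajectory as a weighted sum of features.
FEATURES = ["progress", "lateral_jerk", "obstacle_proximity"]

def reward(weights, features):
    """Linear reward: dot product of weights and trajectory features."""
    return sum(weights[f] * features[f] for f in FEATURES)

# Fixed, hand-tuned weights (the baseline that does not adapt to situations).
fixed_w = {"progress": 1.0, "lateral_jerk": -0.5, "obstacle_proximity": -2.0}

def situational_weights(situation):
    """Stand-in for the learned mapping from situation to reward weights:
    here we simply penalize obstacle proximity harder in dense traffic."""
    w = dict(fixed_w)
    if situation == "dense_traffic":
        w["obstacle_proximity"] = -4.0
    return w
```

The same trajectory then scores differently per situation, which is the behavior a single hand-tuned linear reward cannot reproduce without a-priori knowledge of the situation.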


How does Artificial Intelligence (AI) work in Autonomous Vehicles? - Cyblance

#artificialintelligence

AI has become a popular word in the tech industry, but how does it actually work in autonomous vehicles? To understand this, we first need to consider the human experience of driving a vehicle: we use sensory functions such as vision and hearing to watch the road and the other vehicles on it. When we halt at a red light or wait for a pedestrian to cross the road, we use our memory to make these quick decisions. Years of driving practice teach us to look for the little things we face often on the roads -- it could be a nice route home that avoids the bumps in the road. We are building autonomous vehicles that drive themselves, but we want them to drive as human drivers do.


Rapidly Adapting Artificial Neural Networks for Autonomous Navigation

Pomerleau, Dean

Neural Information Processing Systems

The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network that uses inputs from a video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified Chevy van. This paper describes training techniques which allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's response to new situations. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 20 miles per hour. Previous trainable connectionist perception systems have often ignored important aspects of the form and content of available sensor data. Because of the assumed impracticality of training networks to perform realistic high-level perception tasks, connectionist researchers have frequently restricted their task domains to either toy problems (e.g. the TC identification problem [11] [6]) or fixed low-level operations (e.g.
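The essence of ALVINN's training scheme, learning to steer by watching a human driver, is behavioral cloning. The sketch below reduces this to its simplest form: fitting a linear map from a tiny synthetic "image" to a steering value via on-line gradient descent on (image, human steering) pairs. ALVINN itself was a back-propagation network with a hidden layer driving from real camera and rangefinder input; every value here is synthetic.

```python
import random

random.seed(1)

N_PIXELS = 8  # a toy "image" is just a short pixel vector
# Hidden ground-truth mapping playing the role of the human driver's policy.
true_w = [0.5, -0.3, 0.0, 0.2, 0.1, -0.1, 0.4, -0.2]

def human_steering(img):
    """The demonstration signal: the human's steering response to an image."""
    return sum(w * p for w, p in zip(true_w, img))

# Collect demonstrations, then clone them with per-sample gradient descent.
data = [[random.uniform(-1, 1) for _ in range(N_PIXELS)] for _ in range(200)]
w = [0.0] * N_PIXELS
lr = 0.1
for _ in range(50):  # epochs
    for img in data:
        pred = sum(wi * p for wi, p in zip(w, img))
        err = pred - human_steering(img)
        w = [wi - lr * err * p for wi, p in zip(w, img)]
```

Because the demonstrations are noiseless and the target is linear, the cloned weights converge to the demonstrator's; the real system faced noisy sensors and a nonlinear policy, hence the hidden layer.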

