Khan, Qadeer
MA-DV2F: A Multi-Agent Navigation Framework using Dynamic Velocity Vector Field
Ma, Yining, Khan, Qadeer, Cremers, Daniel
In this paper we propose MA-DV2F: Multi-Agent Dynamic Velocity Vector Field, a framework for simultaneously controlling a group of vehicles in challenging environments. The DV2F is generated for each vehicle independently and provides a map of the reference orientation and speed that the vehicle must attain at any point on the navigation grid to safely reach its target. The field is dynamically updated depending on the speed and proximity of the ego-vehicle to other agents, which allows imminent collisions to be prevented. Experimental results show that MA-DV2F outperforms competing methods in terms of safety, computational efficiency and accuracy in reaching the target when scaling to a large number of vehicles. The project page for this work can be found here: https://yininghase.github.io/MA-DV2F/
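To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how such a velocity vector field could be evaluated for one vehicle: an attractive term pulls towards the target, while repulsive terms from nearby agents dynamically reshape the field to avert collisions. All function names, gains and radii below are illustrative assumptions.

import numpy as np

def velocity_field(pos, target, others, v_max=1.0, safe_radius=2.0):
    """Illustrative dynamic velocity vector field for one ego-vehicle.

    pos, target: (2,) arrays; others: (N, 2) positions of the other agents.
    Returns a reference velocity vector (orientation and speed combined).
    """
    # Attractive component: head straight for the target.
    to_target = target - pos
    dist = np.linalg.norm(to_target)
    v = v_max * to_target / (dist + 1e-8)

    # Repulsive components: push away from agents inside the safety radius,
    # growing stronger as they close in (this makes the field "dynamic").
    for other in others:
        away = pos - other
        d = np.linalg.norm(away)
        if d < safe_radius:
            v += v_max * (safe_radius - d) / safe_radius * away / (d + 1e-8)

    # Clip to the maximum admissible speed.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * v_max / speed
    return v

Each vehicle would query such a field at its current position at every control step, so the reference orientation and speed adapt as the other agents move.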
Robust Autonomous Vehicle Pursuit without Expert Steering Labels
Pan, Jiaxin, Zhou, Changyao, Gladkova, Mariia, Khan, Qadeer, Cremers, Daniel
In this work, we present a learning method for lateral and longitudinal motion control of an ego-vehicle for vehicle pursuit. The car being controlled does not have a pre-defined route; rather, it reactively adapts to follow a target vehicle while maintaining a safety distance. To train our model, we do not rely on steering labels recorded from an expert driver but effectively leverage a classical controller as an offline label generation tool. In addition, we account for errors in the predicted control values, which can lead to a loss of tracking and catastrophic crashes of the controlled vehicle. To this end, we propose an effective data augmentation approach, which allows training a network capable of handling different views of the target vehicle. During the pursuit, the target vehicle is first localized using a Convolutional Neural Network. The network takes a single RGB image along with the cars' velocities and estimates the target vehicle's pose with respect to the ego-vehicle. This information is then fed to a Multi-Layer Perceptron, which regresses the control commands for the ego-vehicle, namely throttle and steering angle. We extensively validate our approach using the CARLA simulator on a wide range of terrains. Our method demonstrates real-time performance, robustness to different scenarios including unseen trajectories, and high route completion. The project page containing code and multimedia can be publicly accessed here: https://changyaozhou.github.io/Autonomous-Vehicle-Pursuit/.
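The two-stage pipeline described above can be sketched in PyTorch as follows; this is an illustrative approximation, not the authors' actual architecture, and all layer sizes are assumptions: a CNN maps an RGB image plus both cars' velocities to the target's relative pose, and an MLP regresses throttle and steering from that pose.

import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """CNN mapping an RGB image plus both cars' velocities to the
    target vehicle's pose (x, y, yaw) relative to the ego-vehicle."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 + 2, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, image, velocities):
        feat = self.backbone(image)                              # (B, 64)
        return self.head(torch.cat([feat, velocities], dim=1))   # (B, 3) pose

class ControlMLP(nn.Module):
    """MLP regressing throttle and steering angle from the relative pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, pose):
        return self.net(pose)   # (B, 2): throttle, steering

pose = PoseNet()(torch.randn(1, 3, 128, 128), torch.randn(1, 2))
controls = ControlMLP()(pose)

Splitting localization and control this way also lets the data augmentation act on the pose stage alone, which matches the paper's motivation for handling different views of the target.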
Multi Agent Navigation in Unconstrained Environments using a Centralized Attention based Graphical Neural Network Controller
Ma, Yining, Khan, Qadeer, Cremers, Daniel
In this work, we propose a learning-based neural model that provides both the longitudinal and lateral control commands to simultaneously navigate multiple vehicles. The goal is to ensure that each vehicle reaches a desired target state without colliding with any other vehicle or obstacle in an unconstrained environment. The model utilizes an attention-based Graphical Neural Network paradigm that takes into consideration the state of all the surrounding vehicles to make an informed decision. This allows each vehicle to smoothly reach its destination while also evading collision with the other agents. The data and corresponding labels for training such a network are obtained using an optimization-based procedure. Our method also outperforms comparable graphical neural network architectures.

[Figure: We would like to produce a sequence of control actions such that the five vehicles safely reach their destination state without colliding with each other or the circled obstacle. These control actions are produced by the Attention Based Graphical Neural Network (A-GNN). The rectangles with broken boundaries represent the desired destination/target states.]

I. INTRODUCTION Data driven approaches to sensorimotor control have seen a meteoric growth with the advent of deep learning in the last decade [1], [2], [3], [4]. Powerful neural network architectures can now be trained and deployed in real-time applications.
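A minimal sketch of the attention-based message-passing step at the heart of such a controller (illustrative only; the layer sizes, state encoding and single-layer structure are assumptions, not the A-GNN as published):

import torch
import torch.nn as nn

class AttentionGNNLayer(nn.Module):
    """One round of attention-based message passing over agent states.
    Each vehicle attends to all agents (obstacles can be encoded as
    static agents) to build a context-aware feature before control."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, states):
        # states: (num_agents, dim) encoded states of all agents
        q, k, v = self.q(states), self.k(states), self.v(states)
        attn = torch.softmax(q @ k.T / states.shape[-1] ** 0.5, dim=-1)
        return states + attn @ v   # residual update with attended messages

# Hypothetical usage: encode states, message-pass, decode per-vehicle controls.
encoder = nn.Linear(6, 32)            # e.g. (x, y, yaw, speed, target dx, dy)
layer = AttentionGNNLayer(32)
decoder = nn.Linear(32, 2)            # lateral and longitudinal command per vehicle

states = encoder(torch.randn(5, 6))   # five vehicles, as in the figure
controls = decoder(layer(states))     # (5, 2)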
Multi-Vehicle Trajectory Prediction at Intersections using State and Intention Information
Zhu, Dekai, Khan, Qadeer, Cremers, Daniel
Traditional approaches to predicting the future trajectories of road agents rely on knowing information about their past trajectories. This work instead relies only on knowledge of the current state and intended direction to make predictions for multiple vehicles at intersections. Furthermore, message passing of this information between the vehicles provides each one of them a more holistic overview of the environment, allowing for a more informed prediction. This is done by training a neural network which takes the state and intent of the multiple vehicles to predict their future trajectories. Using the intention as an input allows our approach to be extended to additionally control the multiple vehicles to drive towards desired paths. Experimental results demonstrate the robustness of our approach both in terms of trajectory prediction and vehicle control at intersections. The complete training and evaluation code for this work is available here: https://github.com/Dekai21/Multi_Agent_Intersection.
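As an illustrative sketch (not the published model; all dimensions, the one-hot intent encoding and the mean-aggregation choice are assumptions), a network taking state plus intent with one round of message passing could look like this:

import torch
import torch.nn as nn

class IntentTrajectoryNet(nn.Module):
    """Predicts future waypoints for each vehicle from its current state
    and a one-hot intended direction (e.g. left / straight / right),
    after one round of message passing between the vehicles."""
    def __init__(self, state_dim=4, n_intents=3, hidden=64, horizon=10):
        super().__init__()
        self.encode = nn.Linear(state_dim + n_intents, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.decode = nn.Linear(2 * hidden, 2 * horizon)   # (x, y) per step
        self.horizon = horizon

    def forward(self, states, intents):
        h = torch.relu(self.encode(torch.cat([states, intents], dim=-1)))
        # Mean of the other vehicles' messages gives each car a holistic view.
        msgs = torch.relu(self.message(h))
        context = (msgs.sum(0, keepdim=True) - msgs) / max(len(msgs) - 1, 1)
        out = self.decode(torch.cat([h, context], dim=-1))
        return out.view(-1, self.horizon, 2)   # (num_vehicles, horizon, 2)

net = IntentTrajectoryNet()
trajs = net(torch.randn(4, 4), torch.eye(3)[torch.tensor([0, 1, 2, 1])])

Because the intent is an input rather than something inferred from history, swapping in a desired intent turns the same predictor into a tool for steering vehicles towards chosen paths, as the abstract notes.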
Towards Generalizing Sensorimotor Control Across Weather Conditions
Khan, Qadeer, Wenzel, Patrick, Cremers, Daniel, Leal-Taixé, Laura
The ability of deep learning models to generalize well across different scenarios depends primarily on the quality and quantity of annotated data. Labeling large amounts of data for all possible scenarios that a model may encounter would be prohibitively expensive, if feasible at all. We propose a framework to deal with limited labeled training data and demonstrate it on the application of vision-based vehicle control. We show how limited steering angle data available for only one condition can be transferred to multiple different weather scenarios. This is done by leveraging unlabeled images in a teacher-student learning paradigm complemented with an image-to-image translation network. The translation network transfers the images to a new domain, whereas the teacher provides soft supervised targets to train the student on this domain. Furthermore, we demonstrate how the utilization of auxiliary networks can reduce the size of a model at inference time without affecting the accuracy. The experiments show that our approach generalizes well across multiple different weather conditions using only ground truth labels from one domain.
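The teacher-student step described above can be summarized in a short, hypothetical sketch; the translator, teacher and student below are toy placeholder modules standing in for the actual trained networks, and the loss choice is an assumption:

import torch
import torch.nn as nn

# Placeholders: any trained image-to-image translator G: source -> target
# weather, and a teacher already trained on the labeled source condition.
translator = nn.Conv2d(3, 3, 3, padding=1)
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))   # steering from source image
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))   # trained on the new domain

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for source_batch in [torch.randn(8, 3, 64, 64)]:    # labeled-domain images only
    with torch.no_grad():
        target_batch = translator(source_batch)      # move image to the new weather domain
        soft_target = teacher(source_batch)          # teacher's steering prediction
    loss = criterion(student(target_batch), soft_target)
    opt.zero_grad(); loss.backward(); opt.step()

The key point is that the student never needs a human-provided steering label for the new weather: the teacher's output on the original image serves as the soft target.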
Towards Self-Supervised High Level Sensor Fusion
Khan, Qadeer, Schön, Torsten, Wenzel, Patrick
In this paper, we present a framework to control a self-driving car by fusing raw information from RGB images and depth maps. A deep neural network architecture is used for mapping the vision and depth information, respectively, to steering commands. This fusion of information from two sensor sources provides redundancy and fault tolerance in the presence of sensor failures. Even if one of the input sensors fails to produce the correct output, the other functioning sensor would still be able to maneuver the car. Such redundancy is crucial in the safety-critical application of self-driving cars. The experimental results show that our method is capable of learning to use the relevant sensor information even when one of the sensors fails, without any explicit signal indicating the failure.

I. INTRODUCTION The fusion of different sensor modalities in the context of autonomous driving is a crucial aspect in order to be robust against sensor failures. We consider RGB and depth camera sensors for training a control module to maneuver a self-driving car.
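A minimal two-branch fusion network of the kind described could look like the following sketch (layer sizes and fusion-by-concatenation are assumptions, not the paper's exact architecture):

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Two-branch network fusing RGB and depth features into one steering
    command; if one sensor degrades, the other branch's features can still
    carry the prediction."""
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rgb_branch = branch(3)
        self.depth_branch = branch(1)
        self.fuse = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.fuse(feats)   # steering command

steer = FusionNet()(torch.randn(1, 3, 128, 128), torch.randn(1, 1, 128, 128))

Training such a model with one modality occasionally corrupted would encourage the fusion layer to lean on whichever branch is informative, which is the behavior the abstract reports.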
Latent Space Reinforcement Learning for Steering Angle Prediction
Khan, Qadeer, Schön, Torsten, Wenzel, Patrick
Model-free reinforcement learning has recently been shown to successfully learn navigation policies from raw sensor data. In this work, we address the problem of learning driving policies for an autonomous agent in a high-fidelity simulator. Building upon recent research that applies deep reinforcement learning to navigation problems, we present a modular deep reinforcement learning approach to predict the steering angle of the car from raw images. A perception module first encodes the raw image into a compact latent vector; the control module trained with reinforcement learning then takes this latent vector as input to predict the correct steering angle. The experimental results show that our method is capable of learning to maneuver the car without any human control signals.

I. INTRODUCTION Reinforcement learning (RL) is gaining interest as a promising avenue for training end-to-end autonomous driving policies. These algorithms have recently been shown to solve complex tasks such as navigation from raw vision-sensor modalities. However, training these algorithms requires vast amounts of data and interactions with the environment to cover a wide variety of driving scenarios. The collection of such data, if even possible, is costly and time-consuming.
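The modular split can be sketched as follows; the encoder and policy below are illustrative placeholders (latent size and layers are assumptions), with the RL update itself omitted:

import torch
import torch.nn as nn

# A perception encoder (trained separately, e.g. as part of an autoencoder)
# compresses the raw image into a latent vector; a small policy trained with
# RL maps that latent vector to a steering angle.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 16),                 # 16-dim latent vector (size is hypothetical)
)
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh())

with torch.no_grad():
    latent = encoder(torch.randn(1, 3, 128, 128))
steering = policy(latent)              # in [-1, 1]; the RL algorithm updates only the policy

Keeping the RL updates confined to the small policy over a low-dimensional latent space is what makes the approach far more sample-efficient than end-to-end RL on raw pixels.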
Semantic Label Reduction Techniques for Autonomous Driving
Khan, Qadeer, Schön, Torsten, Wenzel, Patrick
Semantic segmentation maps can be used as input to models for maneuvering the controls of a car. However, not all labels may be necessary for making the control decision. One would expect that certain labels, such as road lanes or sidewalks, would be more critical in comparison with labels for vegetation or buildings, which may not have a direct influence on the car's driving decision. In this work, we evaluate and quantify how sensitive and important the different semantic labels are for controlling the car. Labels that do not influence the driving decision are remapped to other classes, thereby simplifying the task to only those labels critical for driving the vehicle.
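Label remapping of this kind is straightforward to express; the class IDs and the particular merge below are hypothetical, not those of any specific dataset:

import numpy as np

# Classes judged irrelevant for the driving decision are merged into a single
# "other" class, shrinking the label set the control model has to reason about.
ROAD, LANE_MARKING, SIDEWALK, VEGETATION, BUILDING, SKY, OTHER = range(7)

remap = {VEGETATION: OTHER, BUILDING: OTHER, SKY: OTHER}

def reduce_labels(seg_map: np.ndarray) -> np.ndarray:
    """Apply the remapping to an (H, W) semantic segmentation map."""
    lut = np.arange(OTHER + 1)          # identity lookup table over all class IDs
    for src, dst in remap.items():
        lut[src] = dst
    return lut[seg_map]

reduced = reduce_labels(np.random.randint(0, 7, size=(64, 64)))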
Modular Vehicle Control for Transferring Semantic Information to Unseen Weather Conditions using GANs
Wenzel, Patrick, Khan, Qadeer, Cremers, Daniel, Leal-Taixé, Laura
End-to-end supervised learning has shown promising results for self-driving cars, particularly under conditions for which it was trained. However, it may not necessarily perform well under unseen conditions. In this paper, we demonstrate how knowledge can be transferred from one weather condition, for which semantic labels and steering commands are available, to a completely new set of conditions for which we have no access to labeled data. The problem is addressed by dividing the task of vehicle control into independent perception and control modules, such that changing one does not affect the other. We train the control module only on the data for the available condition and keep it fixed even under new conditions. The perception module is then used as an interface between the new weather conditions and this control module. The perception module in turn is trained using semantic labels, which we assume are already available for the same weather condition on which the control module was trained. However, obtaining them for other conditions is a tedious and error-prone process. Therefore, we propose to use a generative adversarial network (GAN)-based model to retrieve the semantic information for the new conditions in an unsupervised manner. We introduce a master-servant architecture, where the master model (semantic labels available) trains the servant model (semantic labels not available). The servant model can then be used for steering the vehicle without retraining the control module.
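A hedged sketch of the master-servant training loop (all modules are toy placeholders for the actual trained networks, and the exact supervision signal in the paper may differ):

import torch
import torch.nn as nn

# A GAN generator maps images from a new weather condition back to the
# condition the master was trained on; the master's semantic prediction then
# supervises the servant, which learns to produce semantics directly from
# the new condition.
generator = nn.Conv2d(3, 3, 3, padding=1)   # G: new weather -> training weather
master = nn.Conv2d(3, 13, 1)                # semantic logits, e.g. 13 classes (assumed)
servant = nn.Conv2d(3, 13, 1)               # to be trained on the new weather

opt = torch.optim.Adam(servant.parameters(), lr=1e-4)

for new_batch in [torch.randn(4, 3, 64, 64)]:   # unlabeled new-condition images
    with torch.no_grad():
        pseudo = master(generator(new_batch)).argmax(1)   # (B, H, W) pseudo-labels
    loss = nn.functional.cross_entropy(servant(new_batch), pseudo)
    opt.zero_grad(); loss.backward(); opt.step()

# The fixed control module then consumes the servant's semantic maps, so no
# steering labels are ever needed for the new weather conditions.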