Collaborating Authors

 Perrotton, Xavier


Conditional Vehicle Trajectories Prediction in CARLA Urban Environment

arXiv.org Artificial Intelligence

Imitation learning is becoming more and more successful for autonomous driving. End-to-end approaches (raw signal to command) perform well on relatively simple tasks such as lane keeping and navigation. Mid-to-mid (environment abstraction to mid-level trajectory representation) or direct perception approaches strive to handle more complex, real-life environments and tasks. In this work, we show that complex urban situations can be handled with raw signal input and a mid-level representation. We build a hybrid end-to-mid approach predicting trajectories for neighbor vehicles and for the ego vehicle with a conditional navigation goal. We propose an original architecture inspired by social-pooling LSTMs, taking low- and mid-level data as input and producing trajectories as polynomials of time. We introduce a label augmentation mechanism to reach the level of generalization required to control a vehicle. The performance is evaluated on the CARLA 0.8 benchmark, showing significant improvements over the previously published state of the art.

1. Introduction

Modular pipelines [32] are the most widely used approach to autonomous driving. Their advantage is that the modules are interpretable and relatively mature, in particular on the perception side with the success of deep learning for object detection ([13, 20] among many others). However, the complexity of real-world interactions makes the pipeline itself complex, especially in the planning and decision modules.
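The abstract mentions producing trajectories as polynomials of time. As a minimal sketch of what that representation means (the coefficient values and the two-second horizon below are made-up illustrations, not values from the paper): each spatial coordinate is a polynomial in t, so a whole future trajectory is encoded by a few coefficients and can be sampled at any time step.

```python
import numpy as np

# Hypothetical illustration: a trajectory represented as polynomials of time,
# one polynomial per spatial coordinate. Coefficient values are made up.
coeffs_x = np.array([0.0, 8.0, 0.5])   # x(t) = 8t + 0.5t^2
coeffs_y = np.array([0.0, 0.0, -0.2])  # y(t) = -0.2t^2

def eval_trajectory(coeffs, t):
    """Evaluate c0 + c1*t + c2*t^2 + ... at an array of times t."""
    powers = np.vstack([t**k for k in range(len(coeffs))])
    return coeffs @ powers

t = np.linspace(0.0, 2.0, 5)            # 5 samples over a 2-second horizon
xs = eval_trajectory(coeffs_x, t)
ys = eval_trajectory(coeffs_y, t)
waypoints = np.stack([xs, ys], axis=1)  # (5, 2) array of future positions
```

One appeal of this design choice is compactness: a fixed, small set of coefficients describes a smooth trajectory at arbitrary temporal resolution.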


End to End Vehicle Lateral Control Using a Single Fisheye Camera

arXiv.org Machine Learning

Convolutional neural networks are commonly used to control the steering angle for autonomous cars. Most of the time, multiple long-range cameras are used to generate lateral failure cases. In this paper we present a novel model for generating this data and label augmentation using only one short-range fisheye camera. We present our simulator and show how it can be used as a consistent metric for lateral end-to-end control evaluation. Experiments are conducted on a custom dataset covering more than 10,000 km and 200 hours of open-road driving. Finally, we evaluate this model on real-world driving scenarios: open roads and a custom test track with challenging obstacle avoidance and sharp turns. In our simulator, based on real-world videos, the final model achieved more than 99% autonomy on urban roads.
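Both abstracts rely on label augmentation for lateral control: when a training view is synthetically shifted or rotated off the lane center, the steering label must be corrected so the network learns to recover. A minimal sketch of that idea, assuming a simple proportional correction (the function name, gains, and sign convention — positive steering means right — are illustrative assumptions, not the papers' actual formulation):

```python
# Hypothetical sketch of label augmentation for lateral control.
# Sign convention (an assumption): positive steering = right,
# positive lateral shift = right of lane center.
def augmented_steering(base_steering, lateral_shift_m, yaw_error_rad,
                       k_shift=0.2, k_yaw=1.0):
    """Correct a steering label so the vehicle steers back toward lane center."""
    correction = -k_shift * lateral_shift_m - k_yaw * yaw_error_rad
    return base_steering + correction

# A view shifted 0.5 m left of center, with no heading error, gets a
# positive (rightward, under the assumed convention) corrected label.
label = augmented_steering(0.0, lateral_shift_m=-0.5, yaw_error_rad=0.0)
```

The point of the mechanism is to expose the network to off-center viewpoints with consistent recovery labels, rather than only to the near-perfect trajectories a human driver produces.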