Motion Prediction using Trajectory Sets and Self-Driving Domain Knowledge Machine Learning

Predicting the future motion of vehicles has been studied using various techniques, including stochastic policies, generative models, and regression. Recent work has shown that classification over a trajectory set, which approximates possible motions, achieves state-of-the-art performance and avoids issues like mode collapse. However, map information and the physical relationships between nearby trajectories are not fully exploited in this formulation. We build on classification-based approaches to motion prediction by adding an auxiliary loss that penalizes off-road predictions. This auxiliary loss can easily be pretrained using only map information (e.g., off-road area), which significantly improves performance on small datasets. We also investigate weighted cross-entropy losses to capture spatial-temporal relationships among trajectories. Our final contribution is a detailed comparison of classification and ordinal regression on two public self-driving datasets.
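The loss described above can be sketched as a standard cross-entropy over trajectory-set logits plus a map-derived penalty on off-road candidates. This is a minimal illustrative version, not the authors' implementation; the function name, the `aux_weight` parameter, and the binary off-road mask are assumptions for the sketch.

```python
import numpy as np

def prediction_loss(logits, target_idx, offroad_mask, aux_weight=0.5):
    """Classification over a fixed trajectory set with an auxiliary
    off-road penalty (illustrative sketch).

    logits:       (B, K) scores for K candidate trajectories
    target_idx:   (B,)  index of the candidate closest to ground truth
    offroad_mask: (B, K) 1.0 where a candidate leaves the drivable area
    """
    # softmax over the trajectory-set logits
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # classification term: negative log-likelihood of the matched trajectory
    ce = -np.log(probs[np.arange(len(target_idx)), target_idx]).mean()
    # auxiliary term: expected probability mass placed on off-road candidates;
    # it depends only on map information, so it can be pretrained without labels
    offroad = (probs * offroad_mask).sum(axis=-1).mean()
    return ce + aux_weight * offroad
```

Because the auxiliary term needs only the off-road mask, it can be optimized on map data alone before trajectory labels are available, matching the pretraining idea in the abstract.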

Deep Kinematic Models for Physically Realistic Prediction of Vehicle Trajectories Machine Learning

While the trajectory without the vehicle model appears reasonable, it is physically impossible for a two-axle vehicle to execute its motion in such a manner because its rear wheels cannot turn. The proposed approach outputs a trajectory that is kinematically feasible and correctly predicts that the actor will encroach into the neighboring lane. We summarize the main contributions of our work below:

- We combine powerful deep methods with a kinematic two-axle vehicle motion model in order to output trajectory predictions with guaranteed physical realism;
- While the idea is general and applicable to any deep architecture, we present an example application to a recently proposed state-of-the-art motion prediction method, using rasterized images of vehicle context as input to convolutional neural networks (CNNs) [7];
- We evaluate the method on a large-scale, real-world data set collected by a fleet of SDVs, showing that the system provides accurate, kinematically feasible predictions that outperform the existing state-of-the-art.

2 Related work

2.1 Motion prediction in autonomous driving

Accurate motion prediction of other vehicles is a critical component in many autonomous driving systems [9, 10, 11]. Prediction provides an estimate of the future world state, which can be used to plan an optimal path for the SDV through a dynamic traffic environment. The current state (e.g., position, speed, acceleration) of vehicles around an SDV can be estimated using techniques such as a Kalman filter (KF) [12, 13]. A common approach for short-time-horizon predictions of future motion is to assume that the driver will not change any control inputs (steering, accelerator) and simply propagate the vehicle's current estimated state over time using a physical model (e.g., a vehicle motion model) that captures the underlying kinematics [9]. For longer time horizons the performance of this approach degrades, as the underlying assumption of constant controls becomes increasingly unlikely.
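The constant-controls propagation described above can be sketched with a standard kinematic bicycle model for a two-axle vehicle: hold steering and acceleration fixed and integrate the state forward. This is a minimal sketch under assumed parameter values (wheelbase, time step, horizon), not the paper's exact model.

```python
import math

def propagate_bicycle(x, y, heading, speed, steer, accel,
                      wheelbase=2.8, dt=0.1, horizon=30):
    """Roll a kinematic bicycle model forward assuming the driver holds
    steering angle and acceleration constant (short-horizon baseline)."""
    traj = []
    for _ in range(horizon):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        # rear-axle kinematics: yaw rate limited by wheelbase and steer angle
        heading += speed / wheelbase * math.tan(steer) * dt
        speed = max(0.0, speed + accel * dt)
        traj.append((x, y, heading, speed))
    return traj
```

With zero steering the model produces a straight line; any trajectory it emits is kinematically feasible by construction, which is the property the deep model in the paper is constrained to inherit.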

Improving Movement Predictions of Traffic Actors in Bird's-Eye View Models using GANs and Differentiable Trajectory Rasterization Machine Learning

One of the most critical pieces of the self-driving puzzle is the task of predicting the future movement of surrounding traffic actors, which allows the autonomous vehicle to safely and effectively plan its future route in a complex world. Recently, a number of algorithms have been proposed to address this important problem, spurred by a growing interest of researchers from both industry and academia. Methods based on top-down scene rasterization on one side and Generative Adversarial Networks (GANs) on the other have been shown to be particularly successful, obtaining state-of-the-art accuracy on the task of traffic movement prediction. In this paper we build upon these two directions and propose a raster-based conditional GAN architecture, powered by a novel differentiable rasterizer module at the input of the conditional discriminator that maps generated trajectories into the raster space in a differentiable manner. This simplifies the task for the discriminator, as trajectories that are not scene-compliant are easier to discern, and allows the gradients to flow back, forcing the generator to output better, more realistic trajectories. We evaluated the proposed method on a large-scale, real-world data set, showing that it outperforms state-of-the-art GAN-based baselines.
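One common way to make rasterization differentiable, in the spirit of the module described above, is to splat each trajectory point into the raster with a smooth kernel instead of hard pixel assignment, so the output varies continuously with the point coordinates. The sketch below uses a Gaussian splat; the grid size, resolution, and kernel width are illustrative assumptions, not the paper's design.

```python
import numpy as np

def soft_rasterize(points, size=64, res=0.5, sigma=1.0):
    """Map 2-D trajectory points (meters, ego at raster center) to a raster
    via Gaussian splats, so the result is smooth in the point coordinates."""
    ys, xs = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    raster = np.zeros((size, size))
    for px, py in points:
        # metric coordinates -> pixel coordinates
        cx, cy = px / res + size / 2, py / res + size / 2
        raster += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                         / (2 * sigma ** 2))
    return np.clip(raster, 0.0, 1.0)
```

Because every raster value is a smooth function of the trajectory coordinates, a discriminator consuming this raster can pass gradients back through it to the generator, which is the mechanism the abstract relies on.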

Spectral Temporal Graph Neural Network for Trajectory Prediction Artificial Intelligence

An effective understanding of the contextual environment and accurate motion forecasting of surrounding agents are crucial for the development of autonomous vehicles and social mobile robots. This task is challenging since the behavior of an autonomous agent is affected not only by its own intention, but also by the static environment and surrounding dynamically interacting agents. Previous works focused on utilizing spatial and temporal information in the time domain while not sufficiently taking advantage of cues in the frequency domain. To this end, we propose a Spectral Temporal Graph Neural Network (SpecTGNN), which can capture inter-agent correlations and temporal dependencies simultaneously in the frequency domain in addition to the time domain. SpecTGNN operates on two streams: an agent graph with dynamic state information and an environment graph with features extracted from context images. The model integrates the graph Fourier transform, spectral graph convolution, and temporal gated convolution to encode history information and forecast future trajectories. Moreover, we incorporate a multi-head spatio-temporal attention mechanism to mitigate the effect of error propagation over a long time horizon. We demonstrate the performance of SpecTGNN on two public trajectory prediction benchmark datasets, on which it achieves state-of-the-art prediction accuracy.
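The graph Fourier transform and spectral graph convolution mentioned above can be sketched in their textbook form: eigendecompose the normalized graph Laplacian, project node features into the eigenbasis, scale each frequency, and project back. This is a generic sketch of the operation, not SpecTGNN's architecture; the filter parameterization `theta` is an assumption.

```python
import numpy as np

def spectral_graph_conv(X, A, theta):
    """One spectral graph convolution on node features X (N x F) over a
    symmetric adjacency A (N x N), with per-frequency filter theta (N,)."""
    # symmetric-normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigenvectors of L define the graph Fourier basis
    _, U = np.linalg.eigh(L)
    X_hat = U.T @ X               # forward graph Fourier transform
    X_hat = theta[:, None] * X_hat  # filter each graph frequency
    return U @ X_hat              # inverse graph Fourier transform
```

With an all-ones filter the transform round-trips the input exactly (U is orthonormal), which is a convenient sanity check; learned filters reshape the per-frequency content, capturing the frequency-domain cues the abstract emphasizes.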

Spatial-Temporal Block and LSTM Network for Pedestrian Trajectories Prediction Artificial Intelligence

Pedestrian trajectory prediction is critical for avoiding collisions in autonomous driving, but it is a challenging problem due to social forces and cluttered scenes. Such human-human and human-space interactions lead to many socially plausible trajectories. In this paper, we propose a novel LSTM-based algorithm. We tackle the problem by considering both the static scene and the pedestrians, combining Graph Convolutional Networks and Temporal Convolutional Networks to extract pedestrian features. Each pedestrian in the scene is regarded as a node, and the relationship between each node and its neighborhood is obtained by graph embedding. An LSTM encodes these relationships so that our model predicts the trajectories of all nodes in crowded scenarios simultaneously. To effectively predict multiple possible future trajectories, we further introduce a Spatio-Temporal Convolutional Block to make the network flexible. Experimental results on two public datasets, ETH and UCY, demonstrate the effectiveness of the proposed ST-Block, and our model achieves state-of-the-art performance in human trajectory prediction.
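The node-and-neighborhood construction described above can be sketched as building a soft adjacency among pedestrians from pairwise distances, then row-normalizing it so each node aggregates its neighbors' features. This is one simple, generic way to form such a graph; the interaction radius and inverse-distance weighting are assumptions, not the paper's exact embedding.

```python
import numpy as np

def pedestrian_graph(positions, radius=4.0):
    """Soft adjacency among pedestrians: nearby pairs get inverse-distance
    weights, rows are normalized for neighborhood feature aggregation.

    positions: (N, 2) array of pedestrian coordinates in meters
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # connect pedestrians within the interaction radius (no self-loops)
    A = np.where((dist < radius) & (dist > 0),
                 1.0 / np.maximum(dist, 1e-6), 0.0)
    # row-normalize so each node's neighborhood weights sum to 1
    row = A.sum(axis=1, keepdims=True)
    return A / np.maximum(row, 1e-6)
```

A graph convolution over this matrix mixes each pedestrian's features with those of its neighbors, producing the per-node relationship encoding that the LSTM in the abstract then consumes.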