Cui, Henggang
Deep Kinematic Models for Physically Realistic Prediction of Vehicle Trajectories
Cui, Henggang, Nguyen, Thi, Chou, Fang-Chieh, Lin, Tsung-Han, Schneider, Jeff, Bradley, David, Djuric, Nemanja
While the trajectory without the vehicle model appears reasonable, it is physically impossible for a two-axle vehicle to execute its motion in such a manner because its rear wheels cannot turn. The proposed approach outputs a trajectory that is kinematically feasible and correctly predicts that the actor will encroach into the neighboring lane. We summarize the main contributions of our work below:
- We combine powerful deep methods with a kinematic two-axle vehicle motion model in order to output trajectory predictions with guaranteed physical realism;
- While the idea is general and applicable to any deep architecture, we present an example application to a recently proposed state-of-the-art motion prediction method, using rasterized images of vehicle context as input to convolutional neural networks (CNNs) [7];
- We evaluate the method on a large-scale, real-world data set collected by a fleet of SDVs, showing that the system provides accurate, kinematically feasible predictions that outperform the existing state of the art.

2 Related work

2.1 Motion prediction in autonomous driving

Accurate motion prediction of other vehicles is a critical component in many autonomous driving systems [9, 10, 11]. Prediction provides an estimate of the future world state, which can be used to plan an optimal path for the SDV through a dynamic traffic environment. The current state (e.g., position, speed, acceleration) of vehicles around an SDV can be estimated using techniques such as a Kalman filter (KF) [12, 13]. A common approach for short-horizon predictions of future motion is to assume that the driver will not change any control inputs (steering, accelerator) and to simply propagate the vehicle's current estimated state over time using a physical model (e.g., a vehicle motion model) that captures the underlying kinematics [9]. For longer time horizons the performance of this approach degrades, as the underlying assumption of constant controls becomes increasingly unlikely.
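The constant-controls propagation described above can be sketched with a kinematic bicycle model, in which the rear wheels cannot turn and heading changes only through the front-wheel steering angle. This is a minimal illustration under our own assumptions (state layout, wheelbase value, and function names are ours, not the paper's):

```python
import math

def kinematic_bicycle_step(x, y, heading, speed, steer, accel, wheelbase, dt):
    """One forward-Euler step of a kinematic bicycle (two-axle) model.

    Because heading can only change through the front-wheel steering
    angle, every trajectory rolled out this way is kinematically feasible.
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed * math.tan(steer) / wheelbase * dt
    speed += accel * dt
    return x, y, heading, speed

# Constant-controls rollout: hold steering and acceleration fixed and
# propagate the current state estimate over a short horizon.
state = (0.0, 0.0, 0.0, 10.0)  # x [m], y [m], heading [rad], speed [m/s]
for _ in range(30):            # 3 s horizon at dt = 0.1 s
    state = kinematic_bicycle_step(*state, steer=0.05, accel=0.0,
                                   wheelbase=2.8, dt=0.1)
```

As the abstract notes, this propagation is accurate only over short horizons; the paper's contribution is to let a learned model choose the controls while the kinematic layer guarantees feasibility.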
Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks
Cui, Henggang, Radosavljevic, Vladan, Chou, Fang-Chieh, Lin, Tsung-Han, Nguyen, Thi, Huang, Tzu-Kuo, Schneider, Jeff, Djuric, Nemanja
Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there is still more to be done in order to develop a system capable of operating at a level comparable to the best human drivers. One reason for this is the high uncertainty of traffic behavior and the large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surroundings. In this work, we address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor's surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task. Following extensive offline evaluation and comparison to state-of-the-art baselines, as well as closed course tests, the method was successfully deployed to a fleet of SDVs.
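As a rough illustration of the multimodal output described above, a network head producing several candidate trajectories plus their probabilities might be decoded as follows (the shapes, names, and layout here are hypothetical, not the paper's implementation):

```python
import numpy as np

# Hypothetical shapes: M candidate modes, H future steps, 2 coordinates each.
M, H = 3, 12

def decode_multimodal(head_output):
    """Split a flat network head output into M trajectories and their
    softmax probabilities, mirroring the multiple-trajectories-plus-
    probabilities output described in the abstract."""
    trajs = head_output[: M * H * 2].reshape(M, H, 2)  # (x, y) per step, per mode
    logits = head_output[M * H * 2:]
    probs = np.exp(logits - logits.max())              # numerically stable softmax
    probs /= probs.sum()
    return trajs, probs

raw = np.random.randn(M * H * 2 + M)   # stand-in for a CNN head's output
trajectories, mode_probs = decode_multimodal(raw)
```

Keeping the probabilities as a softmax over per-mode logits lets the downstream planner weigh the candidate futures against each other.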
Motion Prediction of Traffic Actors for Autonomous Driving using Deep Convolutional Networks
Djuric, Nemanja, Radosavljevic, Vladan, Cui, Henggang, Nguyen, Thi, Chou, Fang-Chieh, Lin, Tsung-Han, Schneider, Jeff
Recent algorithmic improvements and hardware breakthroughs have resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity, AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of the crucial aspects of the emerging area, the problem of predicting the future state of an autonomous vehicle's surroundings, necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account the current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer future movement of actors while accounting for the inherent uncertainty of the prediction task. Extensive experiments on real-world data strongly suggest the benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles.
MLtuner: System Support for Automatic Machine Learning Tuning
Cui, Henggang, Ganger, Gregory R., Gibbons, Phillip B.
MLtuner automatically tunes settings for training tunables (such as the learning rate, the momentum, the mini-batch size, and the data staleness bound) that have a significant impact on large-scale machine learning (ML) performance. Traditionally, these tunables are set manually, which is unsurprisingly error-prone and difficult to do without extensive domain knowledge. MLtuner uses efficient snapshotting, branching, and optimization-guided online trial-and-error to find good initial settings as well as to re-tune settings during execution. Experiments show that MLtuner can robustly find and re-tune tunable settings for a variety of ML applications, including image classification (for 3 models and 2 datasets), video classification, and matrix factorization. Compared to state-of-the-art ML auto-tuning approaches, MLtuner is more robust for large problems and over an order of magnitude faster.
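The snapshotting-and-branching idea above can be illustrated with a toy online tuner: branch from a snapshot of the training state, run a short trial per candidate setting, and keep the branch whose loss is lowest. The interface and scoring rule here are our own simplification, not MLtuner's actual API:

```python
import copy

def tune_online(train_step, state, candidates, trial_iters=5):
    """Toy optimization-guided trial-and-error over tunable settings.

    Each candidate setting gets its own branch (a deep copy standing in
    for an efficient snapshot); the branch with the lowest trial loss
    wins and training continues from it.
    """
    best_loss, best_state, best_setting = float("inf"), None, None
    for setting in candidates:
        branch = copy.deepcopy(state)
        loss = float("inf")
        for _ in range(trial_iters):
            loss = train_step(branch, setting)
        if loss < best_loss:
            best_loss, best_state, best_setting = loss, branch, setting
    return best_state, best_setting

# Toy problem: minimize w^2 by gradient descent; the tunable is the
# learning rate. Too small converges slowly, too large diverges.
def train_step(state, lr):
    state["w"] -= lr * 2 * state["w"]
    return state["w"] ** 2

tuned_state, best_lr = tune_online(train_step, {"w": 1.0}, [0.01, 0.4, 1.5])
```

In this toy run the middle learning rate wins: 0.01 barely moves, 1.5 diverges, and 0.4 drives the loss toward zero. Re-tuning during execution amounts to running the same procedure again from the current training state.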
More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server
Ho, Qirong, Cipar, James, Cui, Henggang, Lee, Seunghak, Kim, Jin Kyu, Gibbons, Phillip B., Gibson, Garth A., Ganger, Greg, Xing, Eric P.
We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes.
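The staleness bound at the heart of SSP can be sketched as a shared clock: a worker may proceed past clock c only if the slowest worker has reached at least c minus the bound. This is a toy model of our own, not the parameter server's actual interface:

```python
import threading

class SSPClock:
    """Minimal stale synchronous parallel clock.

    Workers advance their own clocks freely, but before starting a new
    iteration a worker must wait until no other worker lags more than
    `staleness_bound` clocks behind it, so any cached value it reads is
    never older than the bound.
    """
    def __init__(self, num_workers, staleness_bound):
        self.clocks = [0] * num_workers
        self.bound = staleness_bound
        self.cond = threading.Condition()

    def advance(self, worker):
        with self.cond:
            self.clocks[worker] += 1
            self.cond.notify_all()   # wake fast workers waiting on stragglers

    def wait_until_safe(self, worker):
        # Block until this worker is within `bound` clocks of the slowest.
        with self.cond:
            while self.clocks[worker] - min(self.clocks) > self.bound:
                self.cond.wait()
```

With `staleness_bound = 0` this degenerates to fully-synchronous execution (everyone waits at a barrier); with an unbounded value it degenerates to fully-asynchronous execution, which is exactly the spectrum the SSP model interpolates.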