Beyond In-Distribution Performance: A Cross-Dataset Study of Trajectory Prediction Robustness
Yue Yao, Daniel Goehring, Joerg Reichardt
The robustness of trajectory prediction is essential for practical applications in autonomous driving. Progress on trajectory prediction models is catalyzed by public motion datasets and their associated competitions, such as Argoverse 2 (A2) [1] and Waymo Open Motion (WO) [2]. These competitions establish standardized metrics and test protocols and score predictions on test data that is withheld from all competitors and hosted only on protected evaluation servers. This is intended to provide an objective comparison of models' generalization to unseen data. However, the withheld test examples still share many properties with the training samples, such as the sensor setup, map representation, post-processing, and the geographic and scenario-selection biases introduced during dataset creation. Consequently, the test scores reported in each competition are instances of In-Distribution (ID) testing. To evaluate model generalization effectively, it is essential to test models on truly Out-of-Distribution (OoD) samples, such as those drawn from a different motion dataset. We investigate model generalization across the two large-scale motion datasets A2 and WO [3]. With 576k scenarios, WO is more than twice the size of A2, which contains 250k scenarios.
Jan-27-2025
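
The ID/OoD distinction above amounts to scoring the same model on the test split of its own training dataset versus on a held-out foreign dataset. Below is a minimal, self-contained sketch of such a cross-dataset evaluation loop using a standard minADE metric. It is an illustration only: the predictor, the load helpers, and the synthetic stand-in data are hypothetical placeholders, not the paper's models or the actual A2/WO dataset APIs.

# Minimal sketch of ID vs. OoD evaluation with a minADE metric.
# All names here (predict_fn, make_samples) are hypothetical placeholders.
import numpy as np

def min_ade(pred_modes: np.ndarray, gt: np.ndarray) -> float:
    """minADE: average displacement error of the best of K predicted modes.

    pred_modes: (K, T, 2) candidate future trajectories
    gt:         (T, 2)    ground-truth future trajectory
    """
    # Per-mode mean Euclidean distance to the ground truth, then best mode.
    dists = np.linalg.norm(pred_modes - gt[None], axis=-1)  # (K, T)
    return float(dists.mean(axis=1).min())

def evaluate(predict_fn, samples) -> float:
    """Mean minADE of `predict_fn` over an iterable of (history, gt) pairs."""
    return float(np.mean([min_ade(predict_fn(h), gt) for h, gt in samples]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 60, 6  # e.g. a 6 s horizon at 10 Hz, with 6 predicted modes

    def make_samples(n, noise):
        # Synthetic stand-ins for dataset samples; real A2/WO loaders differ.
        out = []
        for _ in range(n):
            gt = np.cumsum(rng.normal(0, noise, (T, 2)), axis=0)
            out.append((gt[:10], gt))  # (observed history, future ground truth)
        return out

    id_split = make_samples(100, noise=0.5)   # stand-in for the ID test split
    ood_split = make_samples(100, noise=1.5)  # stand-in for the foreign dataset

    # A trivial constant-position "model"; a trained predictor goes here.
    predict_fn = lambda hist: np.repeat(hist[-1][None, None], K, 0).repeat(T, 1)

    print("ID  minADE:", evaluate(predict_fn, id_split))
    print("OoD minADE:", evaluate(predict_fn, ood_split))

The gap between the two printed scores is the quantity of interest: a model that generalizes well should degrade gracefully when moving from the ID split to the OoD split.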