Sample-efficient Imitative Multi-token Decision Transformer for Generalizable Real World Driving

Zhou, Hang, Xu, Dan, Ji, Yiding

arXiv.org Artificial Intelligence 

Autonomous driving research has witnessed remarkable progress, with simulation technologies [1][2][3][4] reaching unprecedented levels of realism and real-world driving datasets [5][6][7][8] becoming increasingly available. Despite these advancements, data-driven planning still confronts a formidable obstacle: the effectively infinite state space and broad data distribution of real-world driving. Imitation learning approaches struggle [9][10] when presented with scenarios that deviate from the training distribution, exemplified by rare events such as emergency braking for unforeseen obstacles. They likewise falter on long-tail phenomena, such as navigating unexpected weather conditions or handling the erratic movements of a jaywalking pedestrian. Reinforcement learning (RL) strategies, by contrast, aim to learn policies through reward-based optimization, but RL struggles to bridge the sim-to-real gap and suffers from low sample efficiency [11].
