Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving

Alibeigi, Mina, Ljungbergh, William, Tonderski, Adam, Hess, Georg, Lilja, Adam, Lindstrom, Carl, Motorniuk, Daria, Fu, Junsheng, Widahl, Jenny, Petersson, Christoffer

arXiv.org Artificial Intelligence 

To address this gap, we introduce Zenseact Open Dataset (ZOD), a large-scale and diverse multimodal dataset collected over two years in various European countries, covering an area 9× that of existing datasets. ZOD boasts the highest range and resolution sensors among comparable datasets, coupled with detailed keyframe annotations for 2D and 3D objects (up to 245 m), road instance/semantic segmentation, traffic sign recognition, and road classification. We believe that this unique combination will facilitate breakthroughs in long-range perception and multi-task learning. The dataset is composed of Frames, Sequences, and Drives, designed to encompass both data diversity and support for spatiotemporal learning, sensor fusion, localization, and mapping. Frames consist of 100k curated camera images with two seconds of other supporting sensor data, while the 1473 Sequences and 29 Drives include the entire sensor suite for 20 seconds and a few minutes, respectively. ZOD is the only large-scale AD dataset released under a permissive license, allowing for both research and commercial use. More information, and an extensive devkit, can be found at zod.zenseact.com.

Figure 1: Geographical coverage comparison with other AD datasets using the diversity area metric defined in [27] (top left), and geographical distribution of ZOD Frames overlaid on the map.
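The three-subset structure described in the abstract (Frames, Sequences, Drives) can be summarized in a small illustrative sketch. The class and function names here are hypothetical and are not the official ZOD devkit API; the counts and clip durations are those stated in the abstract.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ZodSubset:
    """Illustrative summary of one ZOD subset (hypothetical, not the devkit API)."""
    name: str
    count: int                      # number of clips/frames in the subset
    clip_seconds: Optional[float]   # None when duration is only loosely specified


# Figures taken directly from the abstract.
ZOD_SUBSETS = [
    ZodSubset("Frames", 100_000, 2.0),    # curated keyframes + 2 s of supporting sensor data
    ZodSubset("Sequences", 1_473, 20.0),  # full sensor suite for 20 s
    ZodSubset("Drives", 29, None),        # full sensor suite for a few minutes
]


def total_items(subsets):
    """Total number of annotated items across all subsets."""
    return sum(s.count for s in subsets)
```

A layout like this makes the trade-off in the abstract explicit: many short, diverse Frames for perception tasks, and fewer but longer Sequences and Drives for spatiotemporal learning, localization, and mapping.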
