Multi-modal Perception Dataset of In-water Objects for Autonomous Surface Vehicles

Mingi Jeong, Arihant Chadda, Ziang Ren, Luyang Zhao, Haowen Liu, Monika Roznere, Aiwei Zhang, Yitao Jiang, Sabriel Achong, Samuel Lensgraf, Alberto Quattrini Li

arXiv.org Artificial Intelligence 

Abstract-- This paper introduces the first publicly accessible multi-modal perception dataset for autonomous maritime navigation, focusing on in-water obstacles in the aquatic environment to enhance situational awareness for Autonomous Surface Vehicles (ASVs). The dataset, which covers diverse objects encountered under varying environmental conditions, aims to bridge a research gap in marine robotics by providing a multi-modal, annotated, and ego-centric perception dataset for object detection and classification. We also demonstrate the applicability of the proposed dataset's framework using open-source, deep learning-based perception algorithms that have shown success. We expect that our dataset will contribute to the development of the marine autonomy pipeline and marine (field) robotics.

I. INTRODUCTION

A significant limitation in the research on autonomous maritime navigation is the lack of relevant multi-modal perception data. [...] vessels naturally rely on multi-modal data for situational awareness, which aligns with the regulations (e.g., rule 5 [...]
