Multi-Space Alignments Towards Universal LiDAR Segmentation
Youquan Liu, Lingdong Kong, Xiaoyang Wu, Runnan Chen, Xin Li, Liang Pan, Ziwei Liu, Yuexin Ma
arXiv.org Artificial Intelligence
A unified and versatile LiDAR segmentation model with strong robustness and generalizability is desirable for safe autonomous driving perception. This work presents M3Net, a one-of-a-kind framework that fulfills multi-task, multi-dataset, multi-modality LiDAR segmentation in a universal manner using a single set of parameters. To better exploit data volume and diversity, we first combine large-scale driving datasets acquired by different types of sensors from diverse scenes, and then conduct alignments in three spaces, namely the data, feature, and label spaces, during training. As a result, M3Net is capable of taming heterogeneous data for training state-of-the-art LiDAR segmentation models. Extensive experiments on twelve LiDAR segmentation datasets verify the effectiveness of our approach. Notably, using a shared set of parameters, M3Net achieves mIoU scores of 75.1%, 83.1%, and 72.4% on the official benchmarks of SemanticKITTI, nuScenes, and Waymo Open, respectively.
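The core recipe the abstract describes, combining heterogeneous LiDAR datasets and aligning them in the data, feature, and label spaces under one shared set of parameters, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration, not the authors' implementation: the toy `PointSegmenter` model, the `data_space_align` and `feature_space_align` functions, and the label-mapping tables are all hypothetical stand-ins for the paper's actual components.

```python
"""Minimal sketch (assumed, not the authors' code) of training one
segmentation model on multiple LiDAR datasets with alignments in the
data, feature, and label spaces."""
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointSegmenter(nn.Module):
    """Toy per-point segmentation network: shared backbone + one head
    predicting classes in a unified label space."""
    def __init__(self, in_dim=4, feat_dim=64, num_unified_classes=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_unified_classes)

    def forward(self, points):
        feats = self.backbone(points)      # (N, feat_dim) per-point features
        return feats, self.head(feats)     # features and per-point logits

def data_space_align(points, beam_scale):
    """Data-space alignment (illustrative): rescale coordinates so scans
    from sensors with different beam configurations share comparable
    spatial statistics."""
    out = points.clone()
    out[:, :3] *= beam_scale
    return out

def feature_space_align(feats_a, feats_b):
    """Feature-space alignment (illustrative): pull the per-dataset
    feature distributions together by matching batch feature means."""
    return F.mse_loss(feats_a.mean(dim=0), feats_b.mean(dim=0))

# Label-space alignment: dataset-specific label id -> unified label id.
# Random tables here; real mappings come from the datasets' class lists.
KITTI_TO_UNIFIED = torch.randint(0, 16, (20,))
NUSC_TO_UNIFIED = torch.randint(0, 16, (17,))

model = PointSegmenter()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for step in range(3):  # random tensors stand in for real dataloaders
    pts_a = data_space_align(torch.randn(1024, 4), beam_scale=1.0)  # e.g. 64-beam
    pts_b = data_space_align(torch.randn(1024, 4), beam_scale=0.8)  # e.g. 32-beam
    lab_a = torch.randint(0, 20, (1024,))
    lab_b = torch.randint(0, 17, (1024,))

    feats_a, logits_a = model(pts_a)
    feats_b, logits_b = model(pts_b)

    # Segmentation loss in the unified label space for both datasets.
    loss_seg = F.cross_entropy(logits_a, KITTI_TO_UNIFIED[lab_a]) \
             + F.cross_entropy(logits_b, NUSC_TO_UNIFIED[lab_b])
    loss = loss_seg + 0.1 * feature_space_align(feats_a, feats_b)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design choice mirrored here is that every dataset shares the same backbone and head; per-dataset differences are absorbed by the three alignment steps rather than by separate parameter sets, which is what allows a single model to serve all benchmarks.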
May 2, 2024
- Genre:
- Research Report (0.64)
- Industry:
- Transportation > Ground > Road (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks (0.67)
      - Natural Language (0.93)
      - Representation & Reasoning (1.00)
      - Robots > Autonomous Vehicles (0.89)
      - Vision (1.00)
    - Sensing and Signal Processing (0.94)