A Probabilistic Formulation of LiDAR Mapping with Neural Radiance Fields
McDermott, Matthew; Rife, Jason
– arXiv.org Artificial Intelligence
This work has been submitted to the IEEE for possible publication.

Abstract -- In this paper we reexamine the process through which a Neural Radiance Field (NeRF) can be trained to produce novel LiDAR views of a scene. Unlike image applications, where camera pixels integrate light over time, LiDAR pulses arrive at specific times. As such, multiple LiDAR returns are possible for any given detector, and the classification of these returns is inherently probabilistic. Applying a traditional NeRF training routine can result in the network learning "phantom surfaces" in free space between conflicting range measurements, similar to the "floater" aberrations that an image model may produce. Code is available at https://github.com/mcdermatt/PLINK.

Neural Radiance Fields (NeRFs) provide continuous representations of scenes by storing information about the surrounding world inside the weights of a neural network [1]. Recent works have extended NeRFs from camera images to LiDAR point clouds for use in localization [2], odometry [3], path planning [4], and data augmentation [5, 6]. To date, LiDAR applications of NeRFs have assumed a deterministic model of the scene.
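To make the "traditional NeRF training routine" concrete, the sketch below (not taken from the linked PLINK repository; tensor names and the ray setup are illustrative assumptions) shows how a conventional LiDAR NeRF composites per-sample densities along a ray into a single expected depth, which is then regressed against a measured range. When two scans report conflicting ranges along nearly the same ray, minimizing such a deterministic loss pushes density toward the space between the two returns, which is the "phantom surface" failure mode described above.

```python
# Minimal sketch (assumed, not the authors' code): conventional NeRF-style
# expected-depth rendering along a single LiDAR ray.
import torch

def render_expected_depth(sigma, t):
    """Composite per-sample densities into one expected return depth.

    sigma: (N,) non-negative densities predicted by the network at each sample.
    t:     (N,) monotonically increasing sample distances along the ray.
    """
    delta = torch.diff(t, append=t[-1:] + 1e10)   # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)       # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                             # transmittance up to each sample
    weights = trans * alpha                       # probability mass per sample
    return (weights * t).sum()                    # expected (mean) return depth

# Deterministic L2 training step: if two scans give conflicting ranges r1 < r2
# for (nearly) the same ray, minimizing (depth - r1)^2 + (depth - r2)^2 drives
# the expected depth toward (r1 + r2) / 2, encouraging the network to place
# density -- a "phantom surface" -- in the free space between the true returns.
```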
Nov-3-2024
- Country:
- North America > United States > Massachusetts (0.28)
- Genre:
- Research Report (0.82)
- Industry:
- Government (0.46)
- Transportation (0.46)