EVLoc: Event-based Visual Localization in LiDAR Maps via Event-Depth Registration

Kuangyi Chen, Jun Zhang, Friedrich Fraundorfer

arXiv.org Artificial Intelligence 

Abstract -- Event cameras are bio-inspired sensors with notable features, including high dynamic range and low latency, which make them exceptionally suitable for perception in challenging scenarios such as high-speed motion and extreme lighting conditions. In this paper, we explore their potential for localization within pre-existing LiDAR maps, a critical task for applications that require precise navigation and mobile manipulation. Our framework follows a paradigm based on the refinement of an initial pose. Specifically, we first project LiDAR points into 2D space based on a rough initial pose to obtain depth maps, and then employ an optical flow estimation network to align events with LiDAR points in 2D space, followed by camera pose estimation using a PnP solver. To enhance geometric consistency between these two inherently different modalities, we develop a novel frame-based event representation that improves structural clarity. Additionally, given the varying degrees of bias observed in the ground truth poses, we design a module that predicts an auxiliary variable as a regularization term to mitigate the impact of this bias on network convergence. Experimental results on several public datasets demonstrate the effectiveness of our proposed method. To facilitate future research, both the code and the pre-trained models are made available online¹.

I. INTRODUCTION

Accurate localization techniques are essential for autonomous robots, such as self-driving vehicles and drones.
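
As a rough illustration of the pose-refinement pipeline summarized in the abstract, the sketch below projects map points into the image with the initial pose, queries a flow network for 2D event-depth correspondences, and recovers the pose with a PnP solver. This is a minimal sketch: the function names, the dense-flow interface (flow_net), and the use of OpenCV's solvePnP are assumptions for illustration, not the authors' released implementation.

    # Sketch of initial-pose projection + event-depth alignment + PnP.
    # All names below are hypothetical; the released code may differ.
    import numpy as np
    import cv2

    def project_lidar_to_depth(points_world, K, R_init, t_init, image_size):
        """Project LiDAR map points into the image plane with a rough initial
        pose, producing a sparse depth map and the surviving 3D points."""
        h, w = image_size
        pts_cam = (R_init @ points_world.T + t_init.reshape(3, 1)).T  # world -> camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0]                          # keep points in front of camera
        uv = (K @ pts_cam.T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(np.int32)                # perspective projection to pixels
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        uv, pts_cam = uv[inside], pts_cam[inside]
        depth = np.zeros((h, w), dtype=np.float32)
        depth[uv[:, 1], uv[:, 0]] = pts_cam[:, 2]                     # sparse depth map
        return depth, uv, pts_cam

    def refine_pose(event_frame, depth, uv, pts_cam, K, flow_net):
        """Align events with projected LiDAR points via 2D optical flow,
        then estimate the pose correction with a PnP solver."""
        flow = flow_net(event_frame, depth)                           # (h, w, 2) depth -> event offsets
        matched = uv.astype(np.float64) + flow[uv[:, 1], uv[:, 0]]    # 2D positions in the event frame
        ok, rvec, tvec = cv2.solvePnP(pts_cam.astype(np.float64), matched, K, None)
        # pts_cam is expressed in the initial camera frame, so (rvec, tvec)
        # is a correction; composing it with the initial pose gives the
        # refined camera pose in the LiDAR map.
        return ok, rvec, tvec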