Lang, Fengtian
Uni-Gaussians: Unifying Camera and Lidar Simulation with Gaussians for Dynamic Driving Scenarios
Yuan, Zikang, Pu, Yuechuan, Luo, Hongcheng, Lang, Fengtian, Chi, Cheng, Li, Teng, Shen, Yingying, Sun, Haiyang, Wang, Bing, Yang, Xin
Ensuring the safety of autonomous vehicles necessitates comprehensive simulation of multi-sensor data, encompassing inputs from both cameras and LiDAR sensors, across various dynamic driving scenarios. Neural rendering techniques, which utilize collected raw sensor data to simulate these dynamic environments, have emerged as a leading methodology. While NeRF-based approaches can uniformly represent scenes and render data for both cameras and LiDAR, they are hindered by slow rendering speeds due to dense sampling. Conversely, Gaussian Splatting-based methods employ Gaussian primitives for scene representation and achieve rapid rendering through rasterization. However, these rasterization-based techniques struggle to accurately model non-linear optical sensors, which restricts their applicability to sensors beyond pinhole cameras. To address these challenges and enable a unified representation of dynamic driving scenarios with Gaussian primitives, this study proposes a novel hybrid approach: rasterization is used to render image data, while Gaussian ray tracing is employed to render LiDAR data. Experimental results on public datasets demonstrate that our approach outperforms current state-of-the-art methods. This work presents a unified and efficient solution for realistic simulation of camera and LiDAR data in autonomous driving scenarios using Gaussian primitives, offering significant advancements in both rendering quality and computational efficiency.
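As a rough illustration of the hybrid rendering idea, the minimal Python sketch below (an assumption, not the paper's implementation; the function names and the simple per-ray loop are hypothetical) queries one set of Gaussian primitives in two ways: pinhole projection of Gaussian centers for camera-style rasterization, and a closed-form maximum-response depth along each ray for LiDAR-style ray tracing.

    import numpy as np

    def project_to_camera(mu, K, T_wc):
        # Rasterization-style step: project Gaussian centers through a pinhole model.
        T_cw = np.linalg.inv(T_wc)                      # world -> camera
        p_c = (T_cw[:3, :3] @ mu.T + T_cw[:3, 3:4]).T   # centers in the camera frame
        uv = (K @ p_c.T).T
        return uv[:, :2] / uv[:, 2:3], p_c[:, 2]        # pixel coordinates, depths

    def lidar_ray_depth(origin, direction, mu, cov_inv):
        # Ray-tracing step: depth of the maximum Gaussian response along one LiDAR ray.
        best_t, best_resp = np.inf, 0.0
        for m, S in zip(mu, cov_inv):
            # t* minimizes (o + t d - m)^T S (o + t d - m), i.e. maximizes the density.
            t = float(direction @ S @ (m - origin)) / float(direction @ S @ direction)
            if t <= 0.0:
                continue
            diff = origin + t * direction - m
            resp = np.exp(-0.5 * diff @ S @ diff)
            if resp > best_resp:
                best_resp, best_t = resp, t
        return best_t                                   # simulated range along this ray

The closed-form t* comes from minimizing the Mahalanobis distance of a point on the ray to each Gaussian, which is why no dense sampling along the ray is required.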
Direct Sparse Odometry with Continuous 3D Gaussian Maps for Indoor Environments
Deng, Jie, Lang, Fengtian, Yuan, Zikang, Yang, Xin
Accurate localization is essential for robotics and augmented reality applications such as autonomous navigation. Vision-based methods that incorporate prior maps aim to combine LiDAR-level accuracy with the cost efficiency of cameras for robust pose estimation. Existing approaches, however, often depend on unreliable interpolation procedures when associating discrete point cloud maps with dense image pixels, which inevitably introduces depth errors and degrades pose estimation accuracy. We propose a monocular visual odometry framework built on a continuous 3D Gaussian map, which directly assigns geometrically consistent depth values to all extracted high-gradient points without interpolation. Evaluations on two public datasets demonstrate superior tracking accuracy compared to existing methods. We have released the source code of this work for the development of the community.
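To make the depth-assignment-without-interpolation idea concrete, here is a minimal Python sketch (hypothetical interfaces, not the released code; the depths are assumed to come from querying the continuous Gaussian map along each pixel ray) of high-gradient point selection and the direct photometric residual that such depths feed into.

    import numpy as np

    def select_high_gradient(img, thresh=20.0):
        # Pick pixels whose image-gradient magnitude exceeds a threshold.
        gy, gx = np.gradient(img.astype(np.float64))
        ys, xs = np.where(np.hypot(gx, gy) > thresh)
        return np.stack([xs, ys], axis=1)               # (u, v) pixel coordinates

    def photometric_residuals(img_ref, img_cur, pts, depths, K, T_cur_ref):
        # Warp points (with map-assigned depths) from the reference frame into the
        # current frame and compare intensities; tracking minimizes these residuals
        # over the relative pose T_cur_ref.
        K_inv = np.linalg.inv(K)
        res = []
        for (u, v), z in zip(pts, depths):              # z comes from the Gaussian map
            p_ref = z * (K_inv @ np.array([u, v, 1.0]))
            p_cur = T_cur_ref[:3, :3] @ p_ref + T_cur_ref[:3, 3]
            uvw = K @ p_cur
            u2, v2 = uvw[0] / uvw[2], uvw[1] / uvw[2]
            if 0 <= int(v2) < img_cur.shape[0] and 0 <= int(u2) < img_cur.shape[1]:
                res.append(float(img_cur[int(v2), int(u2)]) - float(img_ref[v, u]))
        return np.asarray(res)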
LIWO: Lidar-Inertial-Wheel Odometry
Yuan, Zikang, Lang, Fengtian, Xu, Tianle, Yang, Xin
LiDAR-inertial odometry (LIO), which fuses the complementary information of a LiDAR and an Inertial Measurement Unit (IMU), is an attractive solution for state estimation. In LIO, both pose and velocity are regarded as state variables that need to be solved. However, the widely used Iterative Closest Point (ICP) algorithm can only constrain the pose, while the velocity can only be constrained by IMU pre-integration. As a result, the velocity estimates tend to be updated merely in accordance with the pose results. In this paper, we propose LIWO, an accurate and robust LiDAR-inertial-wheel (LIW) odometry that fuses measurements from a LiDAR, an IMU and a wheel encoder in a bundle adjustment (BA) based optimization framework. The wheel encoder provides velocity measurements as an important observation, which helps LIO produce a more accurate state prediction. In addition, constraining the velocity variable with the wheel-encoder observation during optimization further improves the accuracy of state estimation. Experimental results on two public datasets demonstrate that our system outperforms all state-of-the-art LIO systems, achieving a smaller absolute trajectory error (ATE), and that embedding a wheel encoder greatly improves the performance of LIO within the BA framework.
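A minimal Python sketch of how a wheel-encoder velocity observation can enter a BA-style cost alongside the LiDAR and IMU factors; the state layout, the identity body-to-odometer rotation, and the scalar weighting are assumptions, not the authors' exact formulation.

    import numpy as np

    def wheel_velocity_residual(R_wb, v_world, v_wheel, R_bo=np.eye(3)):
        # Residual between the velocity state (expressed in the wheel-odometer frame)
        # and the wheel-encoder measurement; R_bo is an assumed body-to-odometer rotation.
        v_body = R_wb.T @ v_world
        return R_bo.T @ v_body - v_wheel

    def ba_cost(residual_blocks, weights):
        # Sum of weighted squared residuals from the LiDAR (ICP), IMU pre-integration
        # and wheel-encoder factors inside one bundle-adjustment window.
        return sum(w * float(r @ r) for r, w in zip(residual_blocks, weights))

Because the wheel term observes the velocity state directly, the optimizer no longer has to infer velocity solely from IMU pre-integration between pose constraints.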
SR-LIO: LiDAR-Inertial Odometry with Sweep Reconstruction
Yuan, Zikang, Lang, Fengtian, Xu, Tianle, Yang, Xin
This paper proposes a novel LiDAR-inertial odometry (LIO), named SR-LIO, based on an iterated extended Kalman filter (iEKF) framework. We adapt the sweep reconstruction method, which segments and reconstructs raw input sweeps from a spinning LiDAR to obtain reconstructed sweeps at a higher frequency. We find that this method effectively reduces the time interval of each iterated state update, improving state estimation accuracy and enabling the iEKF framework to fuse high-frequency IMU and low-frequency LiDAR data. To prevent trajectory inaccuracy caused by applying distortion correction multiple times to the same point, we further propose performing distortion correction separately for each segment. Experimental results on four public datasets demonstrate that SR-LIO outperforms all existing state-of-the-art methods in accuracy, and that reducing the time interval of the iterated state update via the proposed sweep reconstruction improves both the accuracy and the frequency of the estimated states. The source code of SR-LIO is publicly available for the development of the community.
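A minimal Python sketch of the sweep-reconstruction idea under stated assumptions (equal-duration segments and a sliding window of the most recent segments; the class and parameter names are hypothetical, not the released implementation): each incoming segment yields a new reconstructed sweep, so the iEKF can update at segment rate rather than at the raw sweep rate.

    import numpy as np

    def segment_sweep(points, stamps, n_seg=3):
        # Split one raw sweep's points into n_seg equal-duration time segments.
        edges = np.linspace(stamps.min(), stamps.max() + 1e-9, n_seg + 1)
        idx = np.digitize(stamps, edges[1:-1])
        return [points[idx == i] for i in range(n_seg)]

    class SweepReconstructor:
        # Keep a sliding window of the most recent segments and emit a reconstructed
        # sweep every time a new segment arrives, i.e. more frequently than the
        # raw sweep rate.
        def __init__(self, n_seg=3):
            self.n_seg, self.buffer = n_seg, []

        def push(self, segment):
            self.buffer.append(segment)
            self.buffer = self.buffer[-self.n_seg:]
            return np.vstack(self.buffer)               # input for the next iEKF update

Correcting motion distortion per segment, as the abstract describes, then avoids re-correcting the same point each time it reappears in a later reconstructed sweep.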