Spatiotemporal Calibration


Spatiotemporal Calibration and Ground Truth Estimation for High-Precision SLAM Benchmarking in Extended Reality

Shu, Zichao, Bei, Shitao, Li, Lijun, Chen, Zetao

arXiv.org Artificial Intelligence

Simultaneous localization and mapping (SLAM) plays a fundamental role in extended reality (XR) applications. As the standards for immersion in XR continue to increase, the demands for SLAM benchmarking have become more stringent. Trajectory accuracy is the key metric, and marker-based optical motion capture (MoCap) systems are widely used to generate ground truth (GT) because of their drift-free and relatively accurate measurements. However, the precision of MoCap-based GT is limited by two factors: the spatiotemporal calibration with the device under test (DUT) and the inherent jitter in the MoCap measurements. These limitations hinder accurate SLAM benchmarking, particularly for key metrics like rotation error and inter-frame jitter, which are critical for immersive XR experiences. This paper presents a novel continuous-time maximum likelihood estimator to address these challenges. The proposed method integrates auxiliary inertial measurement unit (IMU) data to compensate for MoCap jitter. Additionally, a variable time synchronization method and a pose residual based on screw congruence constraints are proposed, enabling precise spatiotemporal calibration across multiple sensors and the DUT. Experimental results demonstrate that our approach outperforms existing methods, achieving the precision necessary for comprehensive benchmarking of state-of-the-art SLAM algorithms in XR applications. Furthermore, we thoroughly validate the practicality of our method by benchmarking several leading XR devices and open-source SLAM algorithms. The code is publicly available at https://github.com/ylab-xrpg/xr-hpgt.
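As an illustration of the screw congruence constraint the abstract mentions: for two rigidly linked trajectories (MoCap body and DUT), corresponding relative motions A and B are related by the hand-eye equation AX = XB and therefore share the same screw rotation angle and pitch. The sketch below (numpy only, hypothetical helper names) shows how such a residual can be formed; it is a minimal reading of the constraint, not the paper's actual formulation.

```python
import numpy as np

def screw_parameters(T):
    """Rotation angle and pitch of the screw motion of a 4x4 rigid transform T."""
    R, t = T[:3, :3], T[:3, 3]
    # Rotation angle from the trace of R.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-8:                      # pure translation: axis is undefined
        return theta, np.linalg.norm(t)
    # Rotation axis from the skew-symmetric part of R.
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis /= 2.0 * np.sin(theta)
    pitch = axis.dot(t)                   # translation along the screw axis
    return theta, pitch

def screw_congruence_residual(T_a, T_b):
    """Residual enforcing equal screw angle and pitch of two relative motions.

    For rigidly coupled trajectories, corresponding relative motions are
    similar transforms (A X = X B) and must share these screw invariants.
    """
    th_a, p_a = screw_parameters(T_a)
    th_b, p_b = screw_parameters(T_b)
    return np.array([th_a - th_b, p_a - p_b])
```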


Spatiotemporal Calibration of Doppler Velocity Logs for Underwater Robots

Zhao, Hongxu, Zeng, Guangyang, Shao, Yunling, Zhang, Tengfei, Wu, Junfeng

arXiv.org Artificial Intelligence

Acoustic sensors, particularly Doppler Velocity Logs (DVLs), have become indispensable for underwater navigation and environmental sensing. To enable robust fusion of DVL measurements with data from other sensors, precise calibration of extrinsic parameters and temporal synchronization is critical, especially in challenging underwater operating conditions [1]-[5]. Prior work by Xu et al. [6] and Westman and Kaess [7] framed DVL-camera calibration as an odometry alignment problem, matching the trajectory from a DVL-IMU system against the visual one from a camera. A critical limitation of these approaches is their implicit assumption of known and static DVL-IMU extrinsics, an assumption frequently violated by the dynamic nature of underwater deployments. While studies in [8]-[11] address the calibration of IMU-free DVLs, their applicability is strictly limited to co-sensors that provide direct linear and angular velocity measurements, such as SINS/GPS systems. Crucially, a significant gap persists across all these works: none addresses the calibration of the translational extrinsics or accounts for temporal synchronization across heterogeneous sensors.
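The velocity-level constraint underlying such IMU-free DVL calibration can be stated compactly: the DVL measures the velocity of its own origin, which is coupled to the body velocity through the lever arm and the angular velocity. Below is a minimal nonlinear least-squares sketch, assuming one common frame convention and a co-sensor that supplies body-frame linear and angular velocities (as with the SINS/GPS systems mentioned above); it is illustrative, not the cited papers' formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, v_body, w_body, v_dvl):
    """Stacked residuals of the rigid-body velocity constraint.

    Assumed model:  v_dvl[k] = R_DB @ (v_body[k] + cross(w_body[k], p)),
    with R_DB the body-to-DVL rotation and p the DVL lever arm in the body
    frame. x = [rotation vector (3), lever arm (3)].
    """
    R_DB = Rotation.from_rotvec(x[:3]).as_matrix()
    p = x[3:]
    pred = (v_body + np.cross(w_body, p)) @ R_DB.T
    return (pred - v_dvl).ravel()

def calibrate_dvl(v_body, w_body, v_dvl):
    """Least-squares fit of DVL rotation and lever arm from N velocity pairs."""
    sol = least_squares(residuals, np.zeros(6), args=(v_body, w_body, v_dvl))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```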


eKalibr-Stereo: Continuous-Time Spatiotemporal Calibration for Event-Based Stereo Visual Systems

Chen, Shuolong, Li, Xingxing, Yuan, Liu

arXiv.org Artificial Intelligence

The bioinspired event camera, distinguished by its exceptional temporal resolution, high dynamic range, and low power consumption, has been extensively studied in recent years for motion estimation, robotic perception, and object detection. In ego-motion estimation, the stereo event camera setup is commonly adopted due to its direct scale perception and depth recovery. For optimal stereo visual fusion, accurate spatiotemporal (extrinsic and temporal) calibration is required. Considering that few stereo visual calibrators oriented to event cameras exist, we propose eKalibr-Stereo, built on our previous work eKalibr (an event camera intrinsic calibrator), for accurate spatiotemporal calibration of event-based stereo visual systems. To improve the continuity of grid pattern tracking, an additional motion-prior-based tracking module, building upon the grid pattern recognition method in eKalibr, is designed in eKalibr-Stereo to track incomplete grid patterns. Based on the tracked grid patterns, a two-step initialization procedure recovers initial guesses of piecewise B-splines and spatiotemporal parameters, followed by a continuous-time batch bundle adjustment that refines the initialized states to optimal ones. The results of extensive real-world experiments show that eKalibr-Stereo can achieve accurate event-based stereo spatiotemporal calibration. The implementation of eKalibr-Stereo is open-sourced at (https://github.com/Unsigned-Long/eKalibr) to benefit the research community.
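To make the motion-prior idea concrete, a constant-velocity prediction with nearest-neighbor gating is one plausible reading of such a tracking module. The sketch below is purely conceptual (hypothetical function names and a pixel-distance gate), not eKalibr-Stereo's actual tracker.

```python
import numpy as np

def predict_centers(prev_centers, prev_velocities, dt):
    """Constant-velocity motion prior: extrapolate grid-circle centers by dt."""
    return prev_centers + prev_velocities * dt

def track_incomplete_pattern(pred_centers, detections, gate_px=5.0):
    """Associate predicted centers with (possibly partial) new detections.

    Corners with no detection inside the gate keep their predicted position,
    so a partially visible pattern still yields a full set of tracked points.
    """
    tracked = pred_centers.copy()
    matched = np.zeros(len(pred_centers), dtype=bool)
    for i, c in enumerate(pred_centers):
        if len(detections) == 0:
            continue
        d = np.linalg.norm(detections - c, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate_px:
            tracked[i], matched[i] = detections[j], True
    return tracked, matched
```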


MoCap2GT: A High-Precision Ground Truth Estimator for SLAM Benchmarking Based on Motion Capture and IMU Fusion

Shu, Zichao, Bei, Shitao, Dai, Jicheng, Li, Lijun, Chen, Zetao

arXiv.org Artificial Intelligence

Marker-based optical motion capture (MoCap) systems are widely used to provide ground truth (GT) trajectories for benchmarking SLAM algorithms. However, the accuracy of MoCap-based GT trajectories is mainly affected by two factors: spatiotemporal calibration errors between the MoCap system and the device under test (DUT), and inherent MoCap jitter. Consequently, existing benchmarks focus primarily on absolute translation error, as accurate assessment of rotation and inter-frame errors remains challenging, hindering thorough SLAM evaluation. This paper proposes MoCap2GT, a joint optimization approach that integrates MoCap data and inertial measurement unit (IMU) measurements from the DUT for generating high-precision GT trajectories. MoCap2GT includes a robust state initializer to ensure global convergence, introduces a higher-order B-spline pose parameterization on the SE(3) manifold with variable time offset to effectively model MoCap factors, and employs a degeneracy-aware measurement rejection strategy to enhance estimation accuracy. Experimental results demonstrate that MoCap2GT outperforms existing methods and significantly contributes to precise SLAM benchmarking. The source code is available at https://anonymous.4open.science/r/mocap2gt (temporarily hosted anonymously for double-blind review).
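For reference, a cumulative cubic B-spline over SO(3) x R^3 with a variable time offset can be evaluated as below (scipy-based sketch). Note that MoCap2GT parameterizes poses on the SE(3) manifold; this split SO(3) x R^3 form is a simpler stand-in that illustrates how the time offset enters the interpolation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def cum_basis(u):
    """Cumulative basis of a uniform cubic B-spline, u in [0, 1)."""
    return np.array([(u**3 - 3*u**2 + 3*u + 5) / 6.0,
                     (-2*u**3 + 3*u**2 + 3*u + 1) / 6.0,
                     u**3 / 6.0])

def spline_pose(t, t0, dt_knot, Rs, ps, time_offset=0.0):
    """Evaluate a cumulative SO(3) x R^3 cubic B-spline at t + time_offset.

    Rs: list of scipy Rotations (control rotations); ps: (N, 3) control points.
    The variable time offset simply shifts the query time, which is what makes
    it jointly estimable with the trajectory in a batch optimizer.
    """
    s = (t + time_offset - t0) / dt_knot
    i = int(np.floor(s))                  # index of the segment's first knot
    u = s - i
    B = cum_basis(u)
    R, p = Rs[i], ps[i].copy()
    for j in range(3):
        dR = (Rs[i + j].inv() * Rs[i + j + 1]).as_rotvec()
        R = R * Rotation.from_rotvec(B[j] * dR)
        p = p + B[j] * (ps[i + j + 1] - ps[i + j])
    return R, p
```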


iKalibr-RGBD: Partially-Specialized Target-Free Visual-Inertial Spatiotemporal Calibration For RGBDs via Continuous-Time Velocity Estimation

Chen, Shuolong, Li, Xingxing, Li, Shengyu, Zhou, Yuxuan

arXiv.org Artificial Intelligence

Visual-inertial systems have been widely studied and applied in the last two decades, mainly due to their low cost and power consumption, small footprint, and high availability. This trend has simultaneously produced a large number of visual-inertial calibration methods, as accurate spatiotemporal parameters between sensors are a prerequisite for visual-inertial fusion. In our previous work, iKalibr, a continuous-time visual-inertial calibration method was proposed as part of a one-shot multi-sensor resilient spatiotemporal calibration framework. While requiring no artificial target brings considerable convenience, iKalibr demands computationally expensive pose estimation in both initialization and batch optimization, limiting its practicality. Fortunately, this can be vastly improved for RGBD cameras, whose additional depth information permits mapping-free ego-velocity estimation in place of mapping-based pose estimation. In this paper, we present a continuous-time, ego-velocity-estimation-based RGBD-inertial spatiotemporal calibration method, termed iKalibr-RGBD, which is likewise targetless but computationally efficient. The general pipeline of iKalibr-RGBD is inherited from iKalibr and is composed of a rigorous initialization procedure and several continuous-time batch optimizations. The implementation of iKalibr-RGBD is open-sourced at (https://github.com/Unsigned-Long/iKalibr) to benefit the research community.
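The ego-velocity constraint that depth information unlocks is linear: a static 3D point p expressed in the moving camera frame has apparent velocity dp/dt = -(v + w x p). Given scene flow from depth and optical flow, the linear and angular velocities follow from least squares, as in the sketch below (an assumed minimal formulation, not iKalibr-RGBD's estimator).

```python
import numpy as np

def skew(p):
    """3x3 skew-symmetric matrix such that skew(p) @ q == np.cross(p, q)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def ego_velocity_from_scene_flow(points, point_vels):
    """Linear + angular ego-velocity from 3D points and their apparent velocities.

    For a static scene, a point p in the moving camera frame obeys
        dp/dt = -(v + w x p) = -v + skew(p) @ w,
    so each tracked depth pixel contributes three linear equations in (v, w).
    """
    A = np.vstack([np.hstack([-np.eye(3), skew(p)]) for p in points])
    b = np.concatenate(point_vels)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]   # v (m/s) and w (rad/s), both in the camera frame
```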


RIs-Calib: An Open-Source Spatiotemporal Calibrator for Multiple 3D Radars and IMUs Based on Continuous-Time Estimation

Chen, Shuolong, Li, Xingxing, Li, Shengyu, Zhou, Yuxuan, Wang, Shiwen

arXiv.org Artificial Intelligence

Aided inertial navigation systems (INS), typically consisting of an inertial measurement unit (IMU) and an exteroceptive sensor, have been widely accepted as a feasible solution for navigation. Compared with vision-aided and LiDAR-aided INS, radar-aided INS can achieve better performance in adverse weather conditions, since the radar utilizes low-frequency measuring signals that suffer less attenuation in atmospheric gases and rain. For such a radar-aided INS, an accurate spatiotemporal transformation is a fundamental prerequisite for optimal information fusion. In this work, we present RIs-Calib: a spatiotemporal calibrator for multiple 3D radars and IMUs based on continuous-time estimation, which enables accurate spatiotemporal calibration and requires no additional artificial infrastructure or prior knowledge. Our approach starts with a rigorous and robust procedure for state initialization, followed by batch optimizations in which all parameters are steadily refined to globally optimal states. We validate and evaluate RIs-Calib in both simulated and real-world experiments, and the results demonstrate that RIs-Calib is capable of accurate and consistent calibration. We open-source our implementations at (https://github.com/Unsigned-Long/RIs-Calib) to benefit the research community.
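The radar measurement model such calibrators build on is the Doppler constraint: a static detection with unit direction d and radial speed r satisfies r = -d . v, where v is the radar's ego-velocity in its own frame. Below is a RANSAC-style least-squares sketch that rejects dynamic targets as outliers (illustrative; not RIs-Calib's implementation).

```python
import numpy as np

def radar_ego_velocity(directions, doppler, iters=100, thresh=0.05, rng=None):
    """RANSAC least-squares ego-velocity from one 3D radar scan.

    directions: (N, 3) unit direction vectors to detections;
    doppler: (N,) measured radial speeds. Static targets satisfy
    doppler = -directions @ v for the radar's own velocity v.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(doppler), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(doppler), 3, replace=False)
        try:
            v = np.linalg.solve(directions[idx], -doppler[idx])
        except np.linalg.LinAlgError:
            continue                       # degenerate (coplanar) sample
        inliers = np.abs(directions @ v + doppler) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final estimate.
    v, *_ = np.linalg.lstsq(directions[best_inliers],
                            -doppler[best_inliers], rcond=None)
    return v, best_inliers
```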


iKalibr: Unified Targetless Spatiotemporal Calibration for Resilient Integrated Inertial Systems

Chen, Shuolong, Li, Xingxing, Li, Shengyu, Zhou, Yuxuan, Yang, Xiaoteng

arXiv.org Artificial Intelligence

The integrated inertial system, typically combining an IMU with an exteroceptive sensor such as a radar, LiDAR, or camera, has been widely accepted and applied in modern robotic applications for ego-motion estimation, motion control, and autonomous exploration. To improve system accuracy, robustness, and usability, multiple heterogeneous sensors are generally integrated resiliently, which benefits system performance in terms of failure tolerance, perception capability, and environment compatibility. For such systems, accurate and consistent spatiotemporal calibration is required to maintain a unique spatiotemporal framework for multi-sensor fusion. Considering that most existing calibration methods (i) are oriented to specific integrated inertial systems, (ii) often focus only on spatial determination, and (iii) usually require artificial targets, lacking convenience and usability, we propose iKalibr: a unified targetless spatiotemporal calibration framework for resilient integrated inertial systems that overcomes the above issues and enables both accurate and consistent calibration. Four commonly employed sensors are currently supported in iKalibr, namely IMU, radar, LiDAR, and camera. The proposed method starts with a rigorous and efficient dynamic initialization, in which all parameters in the estimator are accurately recovered. Following that, several continuous-time batch optimizations are carried out to refine the initialized parameters to globally optimal ones. Sufficient real-world experiments were conducted to verify the feasibility and evaluate the calibration performance of iKalibr. The results demonstrate that iKalibr can achieve accurate resilient spatiotemporal calibration. We open-source our implementations at (https://github.com/Unsigned-Long/iKalibr) to benefit the research community.
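Common to all of these continuous-time calibrators is fitting raw IMU readings against derivatives of the estimated trajectory: the gyroscope against the body angular velocity of the rotation spline, the accelerometer against the specific force derived from the translation spline. The sketch below uses numerical derivatives for compactness (real implementations use analytic spline derivatives) and is an assumed generic form, not iKalibr's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def imu_residual(pose_fn, t, gyro, accel,
                 g=np.array([0.0, 0.0, -9.81]), h=1e-4):
    """Residual between IMU readings and derivatives of a continuous-time pose.

    pose_fn(t) -> (Rotation, position) is any smooth trajectory, e.g. a
    B-spline; gyro and accel are the raw measurements at time t (biases
    omitted here for brevity).
    """
    R, p = pose_fn(t)
    R_p, p_p = pose_fn(t + h)
    _, p_m = pose_fn(t - h)
    # Body angular velocity from the right-side derivative of the rotation.
    w_pred = (R.inv() * R_p).as_rotvec() / h
    # Specific force: world acceleration minus gravity, in the body frame.
    acc_world = (p_p - 2.0 * p + p_m) / h**2
    a_pred = R.inv().apply(acc_world - g)
    return np.concatenate([gyro - w_pred, accel - a_pred])
```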


Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs

Wise, Emmett, Cheng, Qilong, Kelly, Jonathan

arXiv.org Artificial Intelligence

Autonomous vehicles (AVs) fuse data from multiple sensors and sensing modalities to impart a measure of robustness when operating in adverse conditions. Radars and cameras are popular choices for use in sensor fusion; although radar measurements are sparse in comparison to camera images, radar scans penetrate fog, rain, and snow. However, accurate sensor fusion depends upon knowledge of the spatial transform between the sensors and any temporal misalignment that exists in their measurement times. During the life cycle of an AV, these calibration parameters may change, so the ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require bespoke calibration targets that are not readily available in the field. In this paper, we describe an algorithm for targetless spatiotemporal calibration that does not require specialized infrastructure. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed, external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise. Finally, we demonstrate accurate calibration for three real-world systems, including a handheld sensor rig and a vehicle-mounted sensor array. Our results show that we are able to match the performance of an existing, target-based method, while calibrating in arbitrary, infrastructure-free environments.
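One common way to initialize the temporal part of such a calibration is to cross-correlate the radar's ego-speed profile with the speed profile derived from camera ego-motion; the lag that maximizes the correlation estimates the time offset. Below is a sketch under the assumption that both signals have been resampled onto a common uniform grid (an illustrative initialization strategy, not the paper's identifiability-based procedure).

```python
import numpy as np

def estimate_time_offset(t, speed_a, speed_b, max_shift_s=0.5):
    """Coarse temporal offset between two ego-speed signals.

    t is a uniform timestamp grid onto which both sensors' speed profiles
    have been resampled; the lag maximizing the normalized cross-correlation
    is returned as the offset of signal b relative to signal a.
    """
    dt = t[1] - t[0]
    a = (speed_a - speed_a.mean()) / (speed_a.std() + 1e-12)
    b = (speed_b - speed_b.mean()) / (speed_b.std() + 1e-12)
    max_lag = int(max_shift_s / dt)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(a[max(0, -l):len(a) - max(0, l)],
                   b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(corr))] * dt
```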