RLCNet: An end-to-end deep learning framework for simultaneous online calibration of LiDAR, RADAR, and Camera
Hafeez Husain Cholakkal, Stefano Arrigoni, Francesco Braghin
arXiv.org Artificial Intelligence
Autonomous vehicles are poised to revolutionize transportation by improving road safety, reducing traffic congestion, and increasing mobility convenience [1]. To perceive and interact with their environment accurately, these vehicles rely on a combination of complementary sensors, including LiDAR, RADAR, and cameras. Each sensor offers unique advantages: cameras capture rich visual detail, LiDAR provides precise 3D spatial measurements, and RADAR performs robustly under adverse weather conditions [2]. Sensor fusion leverages the strengths of these modalities to ensure redundancy and resilience, allowing the vehicle to maintain accurate perception in diverse and dynamic environments [3]. A critical component of sensor fusion is extrinsic calibration, which determines the relative positions and orientations of the sensors in a common coordinate frame. However, maintaining precise calibration over time is a persistent challenge. Factors such as mechanical vibrations, temperature changes, and minor collisions can cause sensor drift, and even small misalignments in sensor orientation or position can produce substantial perception errors, potentially compromising vehicle safety.
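To make the notion of extrinsic calibration concrete, the sketch below shows how a rigid-body transform (rotation plus translation) maps points from one sensor frame into another. The mounting geometry and frame names are purely illustrative assumptions, not values from the paper:

```python
import numpy as np

def extrinsic_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, pts):
    """Apply homogeneous transform T to an (N, 3) array of points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    return (T @ homo.T).T[:, :3]

# Illustrative mounting offset: LiDAR 1.2 m above and 0.5 m behind the camera,
# with aligned axes (identity rotation) for simplicity.
R = np.eye(3)
t = np.array([0.0, -1.2, 0.5])
T_lidar_to_cam = extrinsic_matrix(R, t)

lidar_pts = np.array([[10.0, 0.0, 0.0]])  # a point 10 m ahead in the LiDAR frame
cam_pts = transform_points(T_lidar_to_cam, lidar_pts)
# cam_pts -> [[10.0, -1.2, 0.5]]
```

Calibration drift, in these terms, means the true transform no longer matches the stored R and t, so fused points land in the wrong place in the target frame.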
Dec-10-2025