Active Illumination for Visual Ego-Motion Estimation in the Dark

Crocetti, Francesco, Dionigi, Alberto, Brilli, Raffaele, Costante, Gabriele, Valigi, Paolo

arXiv.org Artificial Intelligence 

In this paper, we propose a novel active illumination framework to enhance the performance of VO and V-SLAM algorithms in these challenging conditions. The developed approach dynamically controls a moving light source to illuminate highly textured areas, thereby improving feature extraction and tracking. Specifically, a detector block, which incorporates a deep learning-based enhancement network, identifies regions with relevant features. A pan-tilt controller then steers the light beam toward these areas, so as to provide information-rich images to the ego-motion estimation algorithm. Experimental results on a real robotic platform demonstrate the effectiveness of the proposed method, showing a reduction in pose estimation error of up to 75% with respect to a traditional fixed-lighting technique.

I. INTRODUCTION

Vision-based pose estimation is one of the most widespread strategies for mobile robot localization. Several effective Visual Odometry (VO) and Visual SLAM (V-SLAM) approaches have flourished over the last decades [1], and the recent emergence of visual-inertial techniques has shown even more impressive results [2], [3]. The effectiveness of VO and V-SLAM solutions depends on the capability to extract robust and highly descriptive visual features.
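The abstract describes a two-stage pipeline: a detector scores image regions by feature richness, and a pan-tilt controller aims the light beam at the best region. The following is a minimal sketch of that idea, not the authors' implementation: it assumes a per-pixel feature score map, a fixed grid of candidate regions, a pinhole camera model with known intrinsics, and a light source co-located with the camera. The function names and the grid-based region selection are illustrative assumptions.

```python
import math
import numpy as np

def select_target_region(score_map, grid=(4, 4)):
    """Pick the grid cell with the highest total feature score and
    return the pixel coordinates of its centre (u, v).

    score_map: 2-D array of per-pixel feature scores (hypothetical
    output of a feature detector / enhancement network)."""
    h, w = score_map.shape
    gh, gw = grid
    best, target = -1.0, (w // 2, h // 2)
    for r in range(gh):
        for c in range(gw):
            cell = score_map[r * h // gh:(r + 1) * h // gh,
                             c * w // gw:(c + 1) * w // gw]
            s = float(cell.sum())
            if s > best:
                best = s
                # centre pixel of this grid cell
                target = ((c * w // gw + (c + 1) * w // gw) // 2,
                          (r * h // gh + (r + 1) * h // gh) // 2)
    return target

def pan_tilt_from_pixel(u, v, fx, fy, cx, cy):
    """Convert a target pixel to pan/tilt angles (radians) under a
    pinhole model, assuming the light source sits at the camera centre.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    pan = math.atan2(u - cx, fx)    # positive pan -> right of optical axis
    tilt = math.atan2(cy - v, fy)   # positive tilt -> above axis (image y grows downward)
    return pan, tilt
```

In practice the pan-tilt unit would track this setpoint continuously as the robot moves, so a rate limiter or low-pass filter on the commanded angles would be needed to avoid jitter; that control layer is omitted here.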