QueensCAMP: an RGB-D dataset for robust Visual SLAM
Hudson M. S. Bruno, Esther L. Colombini, Sidney N. Givigi Jr.
arXiv.org Artificial Intelligence
Visual Simultaneous Localization and Mapping (VSLAM) is a fundamental technology for robotics applications. While VSLAM research has achieved significant advancements, robustness under challenging conditions, such as poor lighting, dynamic environments, motion blur, and sensor failures, remains an open problem. To address these challenges, we introduce a novel RGB-D dataset designed for evaluating the robustness of VSLAM systems. The dataset comprises real-world indoor scenes with dynamic objects, motion blur, and varying illumination, as well as emulated camera failures, including lens dirt, condensation, underexposure, and overexposure. Additionally, we offer open-source scripts for injecting camera failures into arbitrary images, enabling further customization by the research community. Our experiments demonstrate that both ORB-SLAM2, a traditional VSLAM algorithm, and TartanVO, a deep-learning-based visual odometry (VO) algorithm, can suffer performance degradation under these conditions. This dataset and the open-source camera-failure tools therefore provide a valuable resource for developing more robust VSLAM systems capable of handling real-world challenges.
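The underexposure and overexposure failures described above can be emulated with a simple gain-and-gamma intensity transform. The sketch below is a hypothetical illustration of the idea, not the dataset's actual injection scripts; the function name and parameters are assumptions for demonstration, and an "image" is represented as nested lists of 8-bit grayscale pixel values to keep the example dependency-free.

```python
# Hypothetical sketch of exposure-failure injection (not the authors' scripts).
# An "image" here is a list of rows of 8-bit grayscale pixel values.

def adjust_exposure(image, gain, gamma=1.0):
    """Apply a gamma curve and intensity gain, clipping the result to [0, 255].

    gain < 1 emulates underexposure (detail crushed toward black);
    gain > 1 emulates overexposure (highlights saturate at 255).
    """
    out = []
    for row in image:
        new_row = []
        for p in row:
            v = gain * ((p / 255.0) ** gamma) * 255.0
            new_row.append(int(min(255.0, max(0.0, round(v)))))
        out.append(new_row)
    return out

img = [[0, 64, 128, 255]]
under = adjust_exposure(img, gain=0.3)  # darkened image
over = adjust_exposure(img, gain=3.0)   # brightened image, highlights clipped
```

A similar per-pixel formulation extends naturally to color images and to the other failure modes (e.g. compositing a dirt or condensation mask over the frame).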
Oct-16-2024