Deep Event Visual Odometry
Simon Klenk, Marvin Motzet, Lukas Koestler, Daniel Cremers
– arXiv.org Artificial Intelligence
Event cameras offer the exciting possibility of tracking the camera's pose during high-speed motion and in adverse lighting conditions. Despite this promise, existing event-based monocular visual odometry (VO) approaches demonstrate limited performance on recent benchmarks. To address this limitation, some methods resort to additional sensors such as IMUs, stereo event cameras, or frame-based cameras. Nonetheless, these additional sensors limit the application of event cameras in real-world devices, since they increase cost and complicate system requirements. Moreover, relying on a frame-based camera makes the system susceptible to motion blur and to failures in high dynamic range (HDR) conditions. To remove the dependency on additional sensors and to push the limits of using only a single event camera, we present Deep Event VO (DEVO), the first monocular event-only system with strong performance on a large number of real-world benchmarks. DEVO sparsely tracks selected event patches over time. A key component of DEVO is a novel deep patch selection mechanism tailored to event data. We significantly decrease the pose tracking error on seven real-world benchmarks by up to 97% compared to event-only methods, and often surpass or come close to stereo or inertial methods. Code is available at https://github.com/tum-vision/DEVO
Dec-15-2023