Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Fygenson, Racquel, Jawad, Kazi, Li, Isabel, Ayoub, Francois, Deen, Robert G., Davidoff, Scott, Moritz, Dominik, Hess-Flores, Mauricio

arXiv.org Artificial Intelligence

Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of error. This metric also provides no visibility into how particular images, lighting conditions, camera positions, or details of the morphology of the remote environment might interact to create inaccuracies. The impact of these unknowns compounds in domains where high-accuracy terrain reconstruction is critical to outcomes, like science or space exploration, where there is no ground truth and inaccurate reconstruction can lead to false results or risk billion-dollar spacecraft.
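The single scalar that BA optimizes is an aggregate of per-observation reprojection errors. The sketch below, using an illustrative pinhole model and hypothetical function names (not VECTOR's implementation), shows how individual 2D residuals are collapsed into one RMS value, which is why that number alone cannot indicate which images or terrain regions drive the error.

# A minimal sketch (not VECTOR's implementation) of how bundle adjustment
# collapses per-observation reprojection errors into one scalar cost.
# The pinhole model and all names below are illustrative assumptions.
import numpy as np

def project(point_3d, rotation, translation, focal_length):
    """Project a 3D point into a camera with a simple pinhole model."""
    p_cam = rotation @ point_3d + translation      # world -> camera frame
    return focal_length * p_cam[:2] / p_cam[2]     # perspective division

def reprojection_residuals(points_3d, cameras, observations):
    """Per-observation 2D residuals: observed pixel minus reprojected pixel."""
    residuals = []
    for cam_idx, pt_idx, pixel in observations:    # (camera, point, 2D measurement)
        rotation, translation, focal_length = cameras[cam_idx]
        residuals.append(pixel - project(points_3d[pt_idx], rotation, translation, focal_length))
    return np.asarray(residuals)

def bundle_adjustment_cost(points_3d, cameras, observations):
    """The single RMS reprojection error that BA typically reports,
    which hides how individual images or scene regions contribute."""
    r = reprojection_residuals(points_3d, cameras, observations)
    return np.sqrt(np.mean(np.sum(r**2, axis=1)))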


Exploring Event Camera-based Odometry for Planetary Robots

Mahlknecht, Florian, Gehrig, Daniel, Nash, Jeremy, Rockenbauer, Friedrich M., Morrell, Benjamin, Delaune, Jeff, Scaramuzza, Davide

arXiv.org Artificial Intelligence

Due to their resilience to motion blur and high robustness in low-light and high dynamic range conditions, event cameras are poised to become enabling sensors for vision-based exploration on future Mars helicopter missions. However, existing event-based visual-inertial odometry (VIO) algorithms either suffer from high tracking errors or are brittle, since they cannot cope with significant depth uncertainties caused by an unforeseen loss of tracking or other effects. In this work, we introduce EKLT-VIO, which addresses both limitations by combining a state-of-the-art event-based frontend with a filter-based backend. This makes it both accurate and robust to uncertainties, outperforming event- and frame-based VIO algorithms on challenging benchmarks by 32%. In addition, we demonstrate accurate performance in hover-like conditions (outperforming existing event-based methods) as well as high robustness in newly collected Mars-like and high-dynamic-range sequences, where existing frame-based methods fail. In doing so, we show that event-based VIO is the way forward for vision-based exploration on Mars.
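As a rough illustration of the architecture described here (an event-based tracking frontend feeding a filter-based backend), the sketch below shows a simple Kalman-filter backend fusing position-like estimates from a hypothetical frontend. The state layout, noise values, and measurement model are assumptions for illustration only and are not EKLT-VIO's actual filter.

# A minimal sketch, not EKLT-VIO itself: a Kalman-filter-style backend that
# fuses position estimates derived from a (hypothetical) event-based
# feature-tracking frontend. All parameters below are illustrative.
import numpy as np

class FilterBackend:
    def __init__(self, dt=0.01):
        self.x = np.zeros(6)                  # state: position (3) + velocity (3)
        self.P = np.eye(6)                    # state covariance
        self.F = np.eye(6)                    # constant-velocity motion model
        self.F[:3, 3:] = dt * np.eye(3)
        self.Q = 1e-3 * np.eye(6)             # process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measure position only
        self.R = 1e-2 * np.eye(3)             # measurement noise

    def predict(self):
        """Propagate the state and covariance one time step."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Fuse a 3D position estimate produced from the frontend's feature tracks."""
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P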