Exploring the best way for UAV visual localization under Low-altitude Multi-view Observation Condition: a Benchmark
Ye, Yibin, Teng, Xichao, Chen, Shuo, Li, Zhang, Liu, Leqi, Yu, Qifeng, Tan, Tao
–arXiv.org Artificial Intelligence
Absolute Visual Localization (AVL) enables an Unmanned Aerial Vehicle (UAV) to determine its position in GNSS-denied environments by establishing geometric relationships between UAV images and geo-tagged reference maps. While many previous works have achieved AVL with image retrieval and matching techniques, research on low-altitude multi-view scenarios remains limited. The low-altitude multi-view condition poses greater challenges due to extreme viewpoint changes. To explore the best UAV AVL approach under such conditions, we propose this benchmark. First, a large-scale low-altitude multi-view dataset called AnyVisLoc was constructed. This dataset includes 18,000 images captured across multiple scenes and altitudes, along with 2.5D reference maps containing aerial photogrammetry maps and historical satellite maps. Second, a unified framework was proposed to integrate state-of-the-art AVL approaches and comprehensively test their performance. The best combined method was chosen as the baseline, and the key factors influencing localization accuracy were thoroughly analyzed based on it. This baseline achieved 74.1% localization accuracy within 5 m under low-altitude multi-view conditions. In addition, a novel retrieval metric called PDM@K was introduced to better align with the characteristics of the UAV AVL task. Overall, this benchmark reveals the challenges of low-altitude multi-view UAV AVL and provides valuable guidance for future research. The dataset and code are available at https://github.com/UAV-AVL/Benchmark.
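The headline number above, "74.1% localization accuracy within 5 m", is the fraction of query images whose estimated position falls within 5 meters of the ground-truth position. Below is a minimal sketch of that metric in Python; the function name and toy coordinates are illustrative assumptions, and the benchmark's actual evaluation code (including the PDM@K retrieval metric, which is not defined in this abstract) lives in the linked repository.

```python
import numpy as np

def localization_accuracy(pred_xy, gt_xy, threshold_m=5.0):
    """Fraction of queries whose estimated 2D position falls within
    `threshold_m` meters of ground truth (hypothetical helper; not
    the paper's official evaluation script)."""
    errors = np.linalg.norm(np.asarray(pred_xy) - np.asarray(gt_xy), axis=1)
    return float(np.mean(errors <= threshold_m))

# Toy usage: 3 of 4 estimates are within 5 m of ground truth -> 0.75
pred = [[0.0, 0.0], [3.0, 4.0], [10.0, 0.0], [1.0, 1.0]]
gt   = [[0.0, 1.0], [0.0, 0.0], [0.0,  0.0], [1.0, 1.0]]
print(localization_accuracy(pred, gt))  # 0.75
```

Errors here are Euclidean distances in a local metric frame; in practice, geodetic coordinates would first be projected into such a frame before thresholding.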
Mar-11-2025
- Country:
- Asia (0.28)
- North America > United States (0.46)
- Genre:
- Research Report (0.50)
- Industry:
- Aerospace & Defense > Aircraft (0.34)
- Information Technology > Robotics & Automation (0.34)