Dynamic Multi-Target Fusion for Efficient Audio-Visual Navigation
Yinfeng Yu, Hailong Zhang, Meiling Zhu
arXiv.org Artificial Intelligence
Audio-visual embodied navigation enables robots to locate sound sources by dynamically integrating visual observations from onboard sensors with the auditory signals emitted by the target. The core challenge lies in effectively leveraging multimodal cues to guide navigation. While prior works have explored basic fusion of visual and audio data, they often overlook deeper perceptual context. To address this, we propose Dynamic Multi-Target Fusion for Efficient Audio-Visual Navigation (DMTF-AVN). Our approach uses a multi-target architecture coupled with a refined Transformer mechanism to filter and selectively fuse cross-modal information. Extensive experiments on the Replica and Matterport3D datasets demonstrate that DMTF-AVN achieves state-of-the-art performance, outperforming existing methods in success rate (SR), path efficiency (SPL), and scene adaptation (SNA). Furthermore, the model exhibits strong scalability and generalizability, paving the way for advanced multimodal fusion strategies in robotic navigation.
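The abstract does not spell out the fusion mechanism, but the "refined Transformer mechanism to filter and selectively fuse cross-modal information" suggests attention-based cross-modal fusion. The sketch below is a generic, hypothetical illustration (all names and shapes are assumptions, not the paper's actual architecture): visual feature tokens act as queries over audio feature tokens, and the attended audio context is concatenated back onto the visual features.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(visual, audio):
    """Hypothetical cross-modal fusion: visual tokens attend over audio tokens.

    visual: (Nv, d) visual feature tokens
    audio:  (Na, d) audio feature tokens
    returns (Nv, 2*d): each visual token concatenated with its audio context.
    """
    d = audio.shape[-1]
    scores = visual @ audio.T / np.sqrt(d)   # (Nv, Na) scaled similarities
    weights = softmax(scores, axis=-1)       # attention over audio tokens
    fused = weights @ audio                  # per-visual-token audio context
    return np.concatenate([visual, fused], axis=-1)

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 16))   # e.g. 4 visual tokens, dim 16
A = rng.standard_normal((6, 16))   # e.g. 6 audio tokens, dim 16
out = cross_modal_attention(V, A)
print(out.shape)  # (4, 32)
```

A real Transformer block would add learned query/key/value projections, multiple heads, and residual connections; this only illustrates the selective-weighting idea behind attention-based fusion.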
Sep-29-2025