Fare: Failure Resilience in Learned Visual Navigation Control
Zishuo Wang, Joel Loo, David Hsu
arXiv.org Artificial Intelligence
Abstract: While imitation learning (IL) enables effective visual navigation, IL policies are prone to unpredictable failures in out-of-distribution (OOD) scenarios. We advance the notion of failure-resilient policies, which not only detect failures but also recover from them automatically. Failure recognition, which identifies the factors causing a failure, is key to informing recovery. We present Fare, a framework for constructing failure-resilient IL policies that embeds OOD detection and failure recognition in them without using explicit failure data, and pairs them with recovery heuristics. Real-world experiments show that Fare enables failure recovery across two different policy architectures, supporting robust long-range navigation in complex environments.

Visual navigation is an attractive approach to robot navigation, leveraging rich visual information from low-cost sensors [1]. Imitation learning (IL) has emerged as a key method for learning visual navigation policies [2]-[4], but it is inherently limited by its training data: IL policies may fail unpredictably on inputs outside the training distribution, often without clear explanation [5]-[7]. This work develops a mechanism that enables IL policies to detect and recover from failures, supporting robust open-world navigation.
Oct-29-2025