Learning to Navigate Under Imperfect Perception: Conformalised Segmentation for Safe Reinforcement Learning
Bethell, Daniel, Gerasimou, Simos, Calinescu, Radu, Imrie, Calum
–arXiv.org Artificial Intelligence
Reliable navigation in safety-critical environments requires both accurate hazard perception and principled uncertainty handling. Existing approaches, despite their effectiveness, assume perfect hazard detection, while uncertainty-aware perception methods lack finite-sample guarantees. We present COPPOL, a conformal-driven perception-to-policy learning approach that integrates distribution-free, finite-sample safety guarantees into semantic segmentation, yielding calibrated hazard maps with rigorous bounds on missed detections. These maps induce risk-aware cost fields for downstream RL planning. Across two satellite-derived benchmarks, COPPOL increases hazard coverage by up to 6x over competitive baselines, achieving near-complete detection of unsafe regions while reducing hazard violations during navigation by up to approximately 50%. Moreover, our approach remains robust to distributional shift, preserving both safety and efficiency.
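The calibration step behind such guarantees can be illustrated with standard split conformal prediction, which is one way to obtain distribution-free, finite-sample bounds on missed detections. The sketch below is not the paper's implementation; the function names and the choice of nonconformity score (one minus the predicted hazard probability at true hazard pixels) are illustrative assumptions.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_masks, alpha=0.1):
    """Split-conformal calibration of a hazard-probability threshold.

    cal_probs: predicted hazard probabilities on held-out calibration images.
    cal_masks: binary ground-truth hazard masks of the same shape.
    Returns a threshold qhat such that, under exchangeability, a true hazard
    pixel's nonconformity score exceeds qhat with probability at most alpha.
    """
    # Nonconformity score: 1 - predicted probability at true hazard pixels.
    scores = 1.0 - cal_probs[cal_masks.astype(bool)]
    n = scores.size
    # Finite-sample-corrected quantile level for split conformal prediction.
    q = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    return np.quantile(scores, q, method="higher")

def conformal_hazard_map(test_probs, qhat):
    # Flag every pixel whose score falls within the calibrated set; this
    # over-covers rather than misses hazards, at the cost of false positives.
    return (1.0 - test_probs) <= qhat
```

A downstream planner can then treat the flagged pixels as high-cost cells when building the risk-aware cost field, so that the coverage guarantee on the hazard map translates into conservative navigation behaviour.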
Oct-22-2025