Probabilistic Robustness in Deep Learning: A Concise yet Comprehensive Guide
Deep learning (DL) has demonstrated significant potential across various safety-critical applications, yet ensuring its robustness remains a key challenge. While adversarial robustness has been extensively studied in worst-case scenarios, probabilistic robustness (PR) offers a more practical perspective by quantifying the likelihood of failures under stochastic perturbations. This paper provides a concise yet comprehensive overview of PR, covering its formal definitions and methods for its evaluation and enhancement. We introduce a reformulated "min-max" optimisation framework for adversarial training specifically designed to improve PR. Furthermore, we explore the integration of PR verification evidence into system-level safety assurance, addressing challenges in translating DL model-level robustness into system-level claims. Finally, we highlight open research questions, including benchmarking PR evaluation methods, extending PR to generative AI tasks, and developing rigorous methodologies and case studies for system-level integration.
arXiv.org Artificial Intelligence
Mar-8-2025
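The abstract characterises PR as the probability that a model's prediction is preserved under stochastic perturbations of its input. As a minimal illustrative sketch (not the paper's evaluation protocol), PR can be estimated by Monte Carlo sampling; the model, the uniform L-infinity perturbation distribution, and the `epsilon` and `n_samples` parameters below are all assumptions chosen for illustration.

```python
# Illustrative sketch: Monte Carlo estimation of probabilistic robustness (PR)
# for a classifier, assuming perturbations drawn uniformly from an
# L-infinity ball of radius epsilon around the input.
import torch

def estimate_pr(model, x, epsilon=0.03, n_samples=1000):
    """Estimate the probability that the model's prediction on x is
    preserved under random perturbations; 1 - PR is the estimated
    failure probability under this perturbation distribution."""
    model.eval()
    with torch.no_grad():
        # Prediction on the clean input.
        clean_label = model(x.unsqueeze(0)).argmax(dim=1)
        # Draw i.i.d. uniform perturbations in [-epsilon, epsilon]^d.
        noise = (torch.rand(n_samples, *x.shape) * 2 - 1) * epsilon
        perturbed = (x.unsqueeze(0) + noise).clamp(0.0, 1.0)
        preds = model(perturbed).argmax(dim=1)
        # PR estimate: fraction of samples that keep the clean prediction.
        return (preds == clean_label).float().mean().item()
```

A reported PR value of, say, 0.98 under this sketch would mean that roughly 2% of sampled perturbations flip the prediction, which is the kind of failure-likelihood quantity the abstract contrasts with worst-case adversarial robustness.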