Risk-Averse Certification of Bayesian Neural Networks
Zhang, Xiyue, Wang, Zifan, Gao, Yulong, Romao, Licio, Abate, Alessandro, Kwiatkowska, Marta
arXiv.org Artificial Intelligence
In light of the inherently complex and dynamic nature of real-world environments, incorporating risk measures is crucial for the robustness evaluation of deep learning models. In this work, we propose a Risk-Averse Certification framework for Bayesian neural networks called RAC-BNN. Our method leverages sampling and optimisation to compute a sound approximation of the output set of a BNN, represented by a set of template polytopes. To enhance robustness evaluation, we integrate a coherent distortion risk measure, Conditional Value at Risk (CVaR), into the certification framework, providing probabilistic guarantees based on empirical distributions obtained through sampling. We validate RAC-BNN on a range of regression and classification benchmarks and compare its performance with a state-of-the-art method. The results show that RAC-BNN effectively quantifies robustness under worst-performing risky scenarios, and achieves tighter certified bounds and higher efficiency on complex tasks.
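To make the risk measure concrete: CVaR at level alpha is the expected loss in the worst (1 - alpha) tail of the loss distribution, and it can be estimated from posterior samples. The sketch below is illustrative only, assuming a generic array of sampled losses; it shows the empirical CVaR estimator, not RAC-BNN's certification pipeline (the template-polytope optimisation that yields sound bounds is not reproduced here).

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Empirical Conditional Value at Risk at level alpha:
    the mean of the worst (1 - alpha) fraction of the sampled losses.
    Illustrative sketch; a certified bound would additionally require
    the sound output-set approximation described in the paper."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var_idx = int(np.ceil(alpha * len(losses))) - 1  # index of the empirical VaR
    var = losses[var_idx]                            # Value at Risk (alpha-quantile)
    tail = losses[losses >= var]                     # worst-case tail beyond VaR
    return tail.mean()

# Hypothetical robustness losses sampled from a BNN posterior
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=10_000)
cvar = empirical_cvar(samples, alpha=0.95)  # averages the worst 5% of losses
```

Because CVaR averages over the tail rather than reporting a single quantile, it is a coherent risk measure and penalises heavy-tailed failure modes that a plain VaR threshold would ignore.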
Nov-29-2024