Get Global Guarantees: On the Probabilistic Nature of Perturbation Robustness
arXiv.org Artificial Intelligence
In safety-critical deep learning applications, robustness measures the ability of neural models to handle imperceptible perturbations in input data, which may otherwise lead to safety hazards. Existing pre-deployment robustness assessment methods typically suffer from significant trade-offs between computational cost and measurement precision, limiting their practical utility. To address these limitations, this paper conducts a comprehensive comparative analysis of existing robustness definitions and their associated assessment methodologies. We propose tower robustness, a novel, practical metric based on hypothesis testing that quantitatively evaluates probabilistic robustness, enabling more rigorous and efficient pre-deployment assessment. Our extensive comparative evaluation illustrates the advantages and applicability of the proposed approach, advancing the systematic understanding and enhancement of model robustness in safety-critical deep learning applications.
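The abstract does not describe how tower robustness is computed, so the following is only a hedged illustration of the general idea it references: evaluating probabilistic robustness via sampling and a statistical bound. The sketch draws random perturbations of an input, measures the fraction that leaves the model's prediction unchanged, and returns a Hoeffding lower confidence bound on the true robustness probability. The function name, the uniform noise model, and the choice of a Hoeffding bound are all assumptions for illustration, not the paper's method.

```python
import math
import random


def probabilistic_robustness_lower_bound(model, x, radius,
                                         n_samples=1000, delta=0.05):
    """Monte Carlo lower confidence bound on P(prediction unchanged).

    Illustrative sketch (not the paper's tower robustness metric):
    - model: callable mapping an input vector (list of floats) to a label
    - x: the clean input vector
    - radius: perturbations are drawn uniformly from [-radius, radius]
      per coordinate (an assumed noise model)
    - returns a (1 - delta)-confidence lower bound via Hoeffding's
      inequality: p_hat - sqrt(ln(1/delta) / (2 * n_samples))
    """
    y0 = model(x)  # reference prediction on the unperturbed input
    hits = 0
    for _ in range(n_samples):
        perturbed = [xi + random.uniform(-radius, radius) for xi in x]
        if model(perturbed) == y0:
            hits += 1
    p_hat = hits / n_samples  # empirical robustness probability
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
    return max(0.0, p_hat - margin)
```

With n_samples = 1000 and delta = 0.05 the Hoeffding margin is about 0.039, so even a model that survives every sampled perturbation receives a bound of roughly 0.96 rather than 1.0; tighter bounds (e.g. Clopper-Pearson intervals) trade this conservatism for more involved computation.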
Aug-27-2025
- Country:
- Asia
- Singapore (0.77)
- South Korea > Seoul
- Seoul (0.05)
- Europe
- Austria (0.04)
- France (0.04)
- Luxembourg > Luxembourg Canton
- Luxembourg City (0.04)
- Portugal > Porto
- Porto (0.04)
- Switzerland (0.04)
- North America
- Canada > British Columbia
- United States
- California > San Francisco County
- San Francisco (0.14)
- New York
- Bronx County > New York City (0.04)
- Kings County > New York City (0.04)
- New York County > New York City (0.14)
- Queens County > New York City (0.04)
- Richmond County > New York City (0.04)
- Genre:
- Research Report
- Experimental Study (0.46)
- New Finding (0.68)
- Industry:
- Education (0.67)
- Government (0.46)
- Information Technology > Security & Privacy (0.68)
- Technology: