Distributionally Robust Statistical Verification with Imprecise Neural Networks
Souradeep Dutta, Michele Caprio, Vivian Lin, Matthew Cleaveland, Kuk Jin Jang, Ivan Ruchkin, Oleg Sokolsky, Insup Lee
arXiv.org Artificial Intelligence
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained by distributional assumptions about the sampling process. Instead, we pose a distributionally robust version of the statistical verification problem for black-box systems, where our performance guarantees hold over a large family of distributions. This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification. A central piece of our approach is an ensemble technique called Imprecise Neural Networks, which provides the uncertainty estimates that guide active learning. The active learning uses an exhaustive neural network verification tool, Sherlock, to collect samples. An evaluation on multiple physical simulators in the OpenAI Gym MuJoCo environments with reinforcement-learned controllers demonstrates that our approach can provide useful and scalable guarantees for high-dimensional systems.
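To make the ensemble idea concrete, here is a minimal sketch of how an ensemble can yield interval-valued ("imprecise") predictions whose width guides active learning. This is an illustration under assumed structure, not the paper's implementation: the names `EnsembleMember`, `imprecise_bounds`, and `acquire_batch` are hypothetical, the network architecture is arbitrary, and the Sherlock verification step that the paper uses to collect samples is omitted.

```python
# Hypothetical sketch: an ensemble whose min/max envelope gives an
# interval-valued prediction, with interval width as an active-learning
# acquisition score. Not the authors' code; names are illustrative.
import torch
import torch.nn as nn

class EnsembleMember(nn.Module):
    """One small regressor in the ensemble (assumed architecture)."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def imprecise_bounds(ensemble, x: torch.Tensor):
    """Lower/upper envelope over ensemble members: an interval-valued
    estimate of the property of interest at each input state."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in ensemble], dim=0)  # (k, n, 1)
    return preds.min(dim=0).values, preds.max(dim=0).values

def acquire_batch(ensemble, candidates: torch.Tensor, k: int):
    """Select the k candidate states with the widest prediction interval,
    i.e. where the ensemble disagrees most (uncertainty-guided sampling)."""
    lo, hi = imprecise_bounds(ensemble, candidates)
    width = (hi - lo).squeeze(-1)
    idx = torch.topk(width, k).indices
    return candidates[idx]
```

In a loop of this kind, the selected states would be labeled (in the paper, via verification with Sherlock rather than plain simulation), the ensemble retrained, and the envelope recomputed until the intervals are tight enough to support the distributionally robust guarantee.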
Dec-11-2023