Robustness of Bayesian Neural Networks to Gradient-Based Attacks

Neural Information Processing Systems

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks remains an open problem.