Figure 2 (left): Asymptotic confidence. Mean confidence in the predicted in-distribution class for different models as one moves away from samples along the trajectories.

Neural Information Processing Systems

We attempted to find such directions by running the following type of attack. In Figure 2, GOOD also stands out as having low confidence in all directions that we studied; however, there is no guarantee that GOOD does not become asymptotically overconfident in some direction. Each convolutional layer is directly followed by a ReLU. In Section 2.1 we describe semi-joint training. We see that the AUCs of ProoD-S are almost identical to those of OE.
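The asymptotic overconfidence discussed here is a known property of ReLU networks: sufficiently far along a ray the network is piecewise linear, so the logits scale linearly with the distance and the softmax confidence tends to 1. A minimal NumPy sketch with a purely linear classifier (a simplifying assumption for illustration, not the models evaluated in the paper) shows the effect of scaling an input along a fixed direction:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))      # toy linear classifier: logits = W @ x
v = rng.normal(size=784)            # a random direction in input space
v /= np.linalg.norm(v)

# confidence in the predicted class as we move further out along v
confidences = [softmax(W @ (alpha * v)).max()
               for alpha in (1.0, 10.0, 100.0, 1000.0, 10000.0)]
# the maximal softmax probability grows toward 1 as alpha increases
```

Here the logits are exactly linear in the scaling factor, so the confidence is monotonically increasing along the ray; for ReLU networks the same behaviour kicks in once the ray stays inside a single linear region.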


Provably Robust Detection of Out-of-distribution Data (almost) for free

Meinke, Alexander, Bitterwolf, Julian, Hein, Matthias

arXiv.org Artificial Intelligence

When applying machine learning in safety-critical systems, a reliable assessment of the uncertainty of a classifier is required. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data, and even if trained to be non-confident on OOD data, one can still adversarially manipulate OOD data so that the classifier again assigns high confidence to the manipulated samples. In this paper we propose a novel method where, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier. In this way we achieve the best of both worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy and with close to state-of-the-art OOD detection performance on non-manipulated OOD data. Moreover, due to its particular construction, our classifier provably avoids the asymptotic overconfidence problem of standard neural networks.
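The construction sketched in the abstract, combining a certifiable OOD detector with a standard classifier, can be illustrated by mixing the classifier's predictive distribution with the uniform distribution, weighted by the detector's in-distribution probability. The following is a hedged sketch only; the function name `combined_predictive` and the exact mixture weighting are our assumptions, not code from the paper:

```python
import numpy as np

def combined_predictive(p_classifier, p_in):
    """Mix the classifier's predictive distribution with the uniform
    distribution over K classes, weighted by the (certifiable)
    in-distribution probability p_in from the OOD detector.
    Hypothetical sketch of the kind of construction described."""
    K = len(p_classifier)
    return p_in * p_classifier + (1.0 - p_in) * np.ones(K) / K

p_f = np.array([0.9, 0.05, 0.05])   # confident classifier output (K = 3)

# detector is sure the input is in-distribution: prediction kept
print(combined_predictive(p_f, p_in=0.99).max())   # ~0.894

# detector is sure the input is OOD: confidence collapses toward 1/K
print(combined_predictive(p_f, p_in=0.01).max())   # ~0.339
```

If the detector's `p_in` is certified to stay low inside a perturbation ball around an OOD sample, the combined confidence is certifiably bounded near `1/K` there, while in-distribution predictions are left essentially unchanged.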