Out-of-distribution Detection with Implicit Outlier Transformation
Wang, Qizhou, Ye, Junjie, Liu, Feng, Dai, Quanyu, Kalander, Marcus, Liu, Tongliang, Hao, Jianye, Han, Bo
– arXiv.org Artificial Intelligence
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection, enhancing detection capability via model fine-tuning with surrogate OOD data. However, surrogate data typically deviate from the OOD data encountered at test time; thus, the performance of OE can be weakened when facing unseen OOD data. To address this issue, we propose a novel OE-based approach that makes the model perform well even for unseen OOD cases. It leads to a min-max learning scheme: searching to synthesize OOD data that lead to the worst judgments, and learning from such OOD data toward uniform performance in OOD detection. In our realization, these worst-case OOD data are synthesized by transforming the original surrogate ones. Specifically, the associated transform functions are learned implicitly, based on our novel insight that model perturbation leads to data transformation. Our methodology offers an efficient way of synthesizing OOD data that can further benefit the detection model beyond the surrogate OOD data. We conduct extensive experiments under various OOD detection setups, demonstrating the effectiveness of our method against its advanced counterparts. The code is publicly available at: github.com/qizhouwang/doe.

Deep learning systems in the open world often encounter out-of-distribution (OOD) data whose label space is disjoint from that of the in-distribution (ID) samples. For many safety-critical applications, deep models should make reliable predictions for ID data while reporting OOD cases (Bulusu et al., 2020) as anomalies. This leads to the well-known OOD detection problem (Lee et al., 2018c; Fang et al., 2022), which has attracted intensive attention in reliable machine learning. OOD detection remains non-trivial since deep models can be over-confident when facing OOD data (Nguyen et al., 2015; Bendale & Boult, 2016), and many efforts have been made in pursuing reliable detection models (Yang et al., 2021; Salehi et al., 2021).
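To make the min-max scheme concrete, below is a minimal PyTorch sketch, not the paper's exact algorithm. It instantiates the stated insight that perturbing model weights implicitly transforms the surrogate OOD data: an inner gradient-ascent step on the weights maximizes the outlier-exposure loss (the "worst judgments"), and the outer step fine-tunes the model against that worst-case view. The single ascent step, the perturbation radius rho, and the uniform-distribution OE loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def oe_loss(logits):
    # Standard outlier-exposure loss: cross-entropy to the uniform
    # distribution, pushing OOD predictions toward maximum entropy.
    return -F.log_softmax(logits, dim=1).mean()

def min_max_oe_step(model, x_id, y_id, x_ood, optimizer, rho=0.05):
    # Inner maximization: one normalized ascent step on the weights
    # that increases the OE loss, implicitly transforming x_ood into
    # worst-case outliers as seen by the model.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(oe_loss(model(x_ood)), params)
    with torch.no_grad():
        scale = rho / (torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12)
        deltas = [scale * g for g in grads]
        for p, d in zip(params, deltas):
            p.add_(d)  # perturb weights in the ascent direction

    # Outer minimization: ID cross-entropy plus the OE loss, both
    # evaluated under the worst-case weight perturbation.
    loss = F.cross_entropy(model(x_id), y_id) + oe_loss(model(x_ood))
    optimizer.zero_grad()
    loss.backward()

    with torch.no_grad():
        for p, d in zip(params, deltas):
            p.sub_(d)  # restore the original weights
    optimizer.step()  # apply worst-case gradients to the clean weights
    return loss.item()
```

A practical trainer would additionally weight the two loss terms with a trade-off coefficient and tune rho per dataset; both are left as hypothetical defaults here.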
Mar-8-2023