NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance
Chhetri, Anju, Korhonen, Jari, Gyawali, Prashnna, Bhattarai, Binod
arXiv.org Artificial Intelligence
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability. In medical imaging this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature- or logit-space representations, recent work suggests these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric to quantify a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection.
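The scoring procedure outlined in the abstract (per-class relevance centroids plus a relevance distance) can be sketched minimally as follows. This is an illustrative simplification, not the paper's implementation: it assumes one mean centroid per class and a Euclidean relevance distance, and the function names (`class_centroids`, `nero_score`) are hypothetical.

```python
import numpy as np

def class_centroids(relevances, labels):
    """Form one representative centroid per ID class.

    relevances: (N, D) array of neuron-level relevance vectors for ID samples.
    labels:     (N,) array of class labels.
    Assumption: a single mean centroid per class; the paper clusters
    relevance, which may yield multiple centroids per class.
    """
    return {c: relevances[labels == c].mean(axis=0) for c in np.unique(labels)}

def nero_score(relevance, centroids):
    """Relevance-distance OOD score for one sample.

    Distance to the nearest class centroid; larger values indicate a
    sample whose relevance pattern deviates from all ID classes
    (Euclidean distance is an assumption here).
    """
    return min(np.linalg.norm(relevance - mu) for mu in centroids.values())
```

On synthetic relevance vectors, an ID-like sample lying near a class centroid receives a lower score than a far-away OOD-like sample, which is the separability the method relies on.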
Sep-25-2025