Attribution-Based Confidence Metric For Deep Neural Networks
We propose a novel confidence metric, namely, attribution-based confidence (ABC), for deep neural networks (DNNs). The ABC metric characterizes whether the output of a DNN on a given input can be trusted. DNNs are known to be brittle on inputs outside the training distribution and are hence susceptible to adversarial attacks. This fragility is compounded by the lack of effectively computable measures of model confidence that correlate well with DNN accuracy. These factors have impeded the adoption of DNNs in high-assurance systems.
Reviews: Attribution-Based Confidence Metric For Deep Neural Networks
Overall Comments: This paper is reasonably well motivated and provides justification for the key use of integrated gradients in computing the confidence score. The paper also presents several empirical demonstrations of the algorithm. The key motivation is that one might want to compute calibration scores without retraining, as is typically required for isotonic regression and Platt scaling. Originality: I am not aware of prior work using integrated gradients to compute calibration scores; however, the literature on interpretability and uncertainty representation is vast.
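The review refers to integrated gradients (Sundararajan et al., 2017). As background, here is a minimal numpy sketch of the integrated-gradients attribution on a toy differentiable model; the weights and inputs are made up for illustration, and this shows only the generic IG computation, not the paper's ABC metric itself.

```python
import numpy as np

# Toy differentiable "model": F(x) = sigmoid(w . x). Weights are arbitrary.
w = np.array([0.5, -1.2, 2.0])

def model(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def grad(x):
    # dF/dx for the toy model: sigmoid'(w . x) * w
    s = model(x)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, steps=100):
    # Riemann-sum (midpoint rule) approximation of
    #   IG_i(x) = (x_i - b_i) * integral_0^1 dF(b + a*(x - b))/dx_i da
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 0.5, -0.3])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to F(x) - F(baseline).
assert np.isclose(attr.sum(), model(x) - model(baseline), atol=1e-3)
```

The completeness check at the end is the property that makes IG attractive here: the per-feature attributions account exactly (up to discretization error) for the change in model output relative to the baseline.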
This paper is well motivated, clearly written, and provides theoretical and empirical evidence for the utility of integrated gradients in computing confidence scores for neural networks. The ideas presented are novel and are backed up well by theory and experiments. A few suggestions for improvement in the final version of the paper: 1. a simple demo against Platt scaling; 2. clarification of the sparseness of IG attribution maps; 3. a more detailed qualitative error analysis of the confidence metric. All in all, this is a good contribution and I recommend its acceptance.
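The first suggestion mentions Platt scaling, the retraining-based calibration baseline named in both reviews. For readers unfamiliar with it, a minimal sketch follows, fitting a sigmoid over raw model scores by gradient descent on the log loss; the scores and labels here are synthetic stand-ins, and this is generic Platt scaling, not a comparison taken from the paper.

```python
import numpy as np

# Hypothetical held-out validation data: raw model scores and binary labels.
rng = np.random.default_rng(0)
scores = rng.normal(size=200)
labels = (scores + 0.5 * rng.normal(size=200) > 0).astype(float)

def platt_fit(s, y, lr=0.1, iters=2000):
    # Fit p(y=1 | s) = sigmoid(a*s + b) by gradient descent on log loss.
    a, b = 1.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))
        ga = np.mean((p - y) * s)  # gradient w.r.t. slope a
        gb = np.mean(p - y)        # gradient w.r.t. intercept b
        a -= lr * ga
        b -= lr * gb
    return a, b

a, b = platt_fit(scores, labels)
calibrated = 1.0 / (1.0 + np.exp(-(a * scores + b)))
```

The point of contrast with ABC is that this step requires fitting parameters (a, b) on held-out labeled data, whereas the ABC metric is computed from attributions on the input alone.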
Authors: Susmit Jha, Sunny Raj, Steven Fernandes, Sumit K. Jha, Somesh Jha, Brian Jalaian, Gunjan Verma, Ananthram Swami