Self-Interpretable Model with Transformation Equivariant Interpretation

Neural Information Processing Systems 

With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow, especially in high-stakes fields. Recent studies have found that interpretation methods can be sensitive and unreliable: the interpretations they produce can be disturbed by perturbations or transformations of the input data. To address this issue, we propose to learn robust interpretations through transformation equivariant regularization in a self-interpretable model.
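The core idea of transformation equivariant interpretation is that interpreting a transformed input should give the same result as transforming the interpretation of the original input. A minimal numpy sketch of such a regularization term is below, assuming input-times-gradient attribution on a linear model as a toy stand-in interpreter and horizontal flip as the transformation; the function names are illustrative and not the authors' implementation.

```python
import numpy as np

def interpret(w, x):
    # Toy interpreter: input-times-gradient attribution for a linear
    # model with weights w (elementwise product, hypothetical stand-in).
    return w * x

def hflip(a):
    # Example spatial transformation: reverse the feature axis.
    return a[:, ::-1]

def equivariance_penalty(w, x, transform):
    # Penalize || interpret(T(x)) - T(interpret(x)) ||^2, i.e. the gap
    # between "interpret the transformed input" and "transform the
    # interpretation" -- zero iff the interpreter is equivariant.
    e_tx = interpret(w, transform(x))
    t_ex = transform(interpret(w, x))
    return float(np.sum((e_tx - t_ex) ** 2))

x = np.array([[0.5, -1.0, 3.0, 2.0]])
w_sym = np.array([[1.0, 2.0, 2.0, 1.0]])   # flip-symmetric weights
w_asym = np.array([[1.0, 0.0, 0.0, 3.0]])  # not flip-symmetric

print(equivariance_penalty(w_sym, x, hflip))   # 0.0: equivariant
print(equivariance_penalty(w_asym, x, hflip))  # > 0: penalized
```

In training, such a penalty would be added to the task loss so the learned interpreter is pushed toward equivariance under the chosen transformation family.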
