Self-Interpretable Model with Transformation Equivariant Interpretation
Neural Information Processing Systems
With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow, especially in high-stakes fields. Recent studies have found that interpretation methods can be sensitive and unreliable: the interpretations can be disturbed by perturbations or transformations of the input data. To address this issue, we propose to learn robust interpretations through transformation equivariant regularization in a self-interpretable model.
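The core idea of transformation equivariant interpretation can be illustrated with a small sketch: an interpretation is equivariant if interpreting a transformed input yields the same result as transforming the interpretation of the original input. The snippet below (a hypothetical illustration, not the paper's actual model) uses a toy elementwise-weighting "interpreter" and a 90-degree rotation as the transformation, and computes a regularization penalty on the disagreement between the two paths:

```python
import numpy as np

def saliency(x, w):
    # Toy "interpreter": elementwise weighting of the input.
    # A hypothetical stand-in for a model's saliency/attribution map.
    return w * x

def equivariance_penalty(x, w, transform):
    # Penalize disagreement between (a) interpreting the transformed
    # input and (b) transforming the interpretation of the original
    # input. Zero penalty means the interpreter is equivariant under
    # this transformation.
    a = saliency(transform(x), w)
    b = transform(saliency(x, w))
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
rot = lambda z: np.rot90(z)

# A spatially uniform weight commutes with rotation, so the penalty is 0.
print(equivariance_penalty(x, 1.0, rot))  # 0.0

# A spatially varying weight does not commute, giving a positive penalty.
w_map = rng.normal(size=(8, 8))
print(equivariance_penalty(x, w_map, rot) > 0.0)  # True
```

In training, such a penalty would be added to the task loss as a regularizer, encouraging the learned interpretations to move consistently with transformations of the input rather than changing arbitrarily.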