Perturbation on Feature Coalition: Towards Interpretable Deep Neural Networks
Hu, Xuran, Zhu, Mingzhe, Feng, Zhenpeng, Daković, Miloš, Stanković, Ljubiša
–arXiv.org Artificial Intelligence
The inherent "black box" nature of deep neural networks (DNNs) compromises their transparency and reliability. Recently, explainable AI (XAI) has garnered increasing attention from researchers, and several perturbation-based interpretation methods have emerged. However, these methods often fail to adequately consider feature dependencies. To solve this problem, we introduce a perturbation-based interpretation guided by feature coalitions, which leverages deep information of the network to extract correlated features. We then propose a carefully designed consistency loss to guide network interpretation. Both quantitative and qualitative experiments are conducted to validate the effectiveness of our proposed method. Code is available at github.com/Teriri1999/Perturebation-on-Feature-Coalition.
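The core idea of perturbation-based interpretation with feature coalitions can be illustrated with a minimal sketch: instead of perturbing features one at a time, correlated features are grouped and perturbed jointly, and each group is scored by the resulting drop in the model's output. The function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
def coalition_importance(model, x, coalitions, baseline=0.0):
    """Score each feature coalition by the output drop observed when
    its members are jointly replaced with a baseline value.

    model      -- callable mapping a feature list to a scalar score
    x          -- input feature list
    coalitions -- list of index lists; correlated features that are
                  perturbed together rather than independently
    """
    base = model(x)
    drops = []
    for indices in coalitions:
        x_pert = list(x)  # copy so the original input is untouched
        for i in indices:
            x_pert[i] = baseline  # perturb the whole coalition jointly
        drops.append(base - model(x_pert))
    return drops


# Toy model with a feature dependency: features 0 and 1 only
# contribute through their product, feature 2 acts alone.
model = lambda v: v[0] * v[1] + 0.5 * v[2]
x = [1.0, 1.0, 1.0]

# Grouping {0, 1} into one coalition captures their joint effect,
# which independent single-feature perturbation would misattribute.
print(coalition_importance(model, x, [[0, 1], [2]]))
```

Perturbing feature 0 or 1 alone already zeroes the product term, so independent perturbation assigns the full drop to each; the coalition view attributes it once, to the pair.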
Aug-23-2024
- Country:
- Asia > China
- Shaanxi Province > Xi'an (0.05)
- Europe
- Montenegro > Podgorica
- Podgorica (0.05)
- Switzerland > Zürich
- Zürich (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Transportation (0.35)