ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping

Neural Information Processing Systems 

This limits interpretability and yields a deterministic output at test time. In this paper, we aim to improve on current feature attribution methods by developing a more interpretable model and, in turn, more meaningful feature attribution maps.
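For context, the determinism criticized here can be seen in standard gradient-based saliency: given a trained model and an input, the attribution map is a single fixed output, with no distribution to sample from. A minimal NumPy sketch with a hypothetical toy model (not the paper's method) illustrates this:

```python
import numpy as np

# Hypothetical toy model, a stand-in for a trained classifier:
# f(x) = sum(tanh(W @ x)).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))

def f(x):
    return np.tanh(W @ x).sum()

def gradient_saliency(x, eps=1e-6):
    """Finite-difference input gradient: one fixed attribution map per input."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = rng.normal(size=5)
m1 = gradient_saliency(x)
m2 = gradient_saliency(x)
# Deterministic at test time: repeated calls on the same input
# always return the identical map, with no stochastic component.
assert np.allclose(m1, m2)
```

A generative, disentangled approach (as the paper proposes) instead allows sampling varied attribution maps for the same input, which is what makes the maps more interpretable.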
