TimeSHAP: Explaining Recurrent Models through Sequence Perturbations
Bento, João, Saleiro, Pedro, Cruz, André F., Figueiredo, Mário A. T., Bizarro, Pedro
arXiv.org Artificial Intelligence
Recurrent neural networks are a standard building block in numerous machine learning domains, from natural language processing to time-series classification. While their application has grown ubiquitous, understanding of their inner workings is still lacking. In practice, the complex decision-making in these models is treated as a black box, creating a tension between accuracy and interpretability. Moreover, the ability to understand a model's reasoning process is important for debugging it and, even more so, for building trust in its decisions. Although considerable research effort has been directed towards explaining black-box models in recent years, recurrent models have received relatively little attention. Any method that aims to explain decisions over a sequence of instances should assess not only feature importance but also event importance, an ability that is missing from state-of-the-art explainers. In this work, we contribute to filling these gaps by presenting TimeSHAP, a model-agnostic recurrent explainer that leverages KernelSHAP's sound theoretical footing and strong empirical results. As the input sequence may be arbitrarily long, we further propose a pruning method that is shown to dramatically improve its efficiency in practice.
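The event-importance idea described above can be illustrated with a minimal sketch. This is not TimeSHAP's actual KernelSHAP weighted-regression estimator; it is a simpler permutation-sampling approximation of Shapley values, where whole events (time steps) are toggled between their observed values and a background replacement, and the function names and setup are illustrative assumptions.

```python
import numpy as np

def event_importance(f, sequence, background, n_samples=200, seed=0):
    """Approximate per-event Shapley importances by sampling permutations.

    f          -- callable mapping a (T, d) array to a scalar model score
    sequence   -- (T, d) array of observed events
    background -- (d,) replacement vector used to "remove" an event
    """
    rng = np.random.default_rng(seed)
    T = sequence.shape[0]
    contrib = np.zeros(T)
    for _ in range(n_samples):
        # Start from an all-background sequence and reveal events in
        # random order, crediting each event with its marginal gain.
        masked = np.tile(background, (T, 1))
        prev = f(masked)
        for t in rng.permutation(T):
            masked[t] = sequence[t]
            cur = f(masked)
            contrib[t] += cur - prev
            prev = cur
    return contrib / n_samples

# For an additive model, each event's importance equals its own score,
# so the estimate is exact regardless of the sampled permutations.
score = lambda s: float(s.sum())
seq = np.array([[1.0], [2.0], [3.0]])
imps = event_importance(score, seq, background=np.zeros(1))
```

For a linear score, the importances recover each event's contribution exactly and sum to `f(sequence) - f(background)`; for a recurrent model, the same loop gives a sampling estimate of how much each time step moved the prediction.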
Nov-30-2020