Human-grounded Evaluations of Explanation Methods for Text Classification
Lertvittayakumjorn, Piyawat; Toni, Francesca
arXiv.org Artificial Intelligence
For text classification in particular, most existing explanation methods identify the parts of the input text that contribute most towards the predicted class (so-called attribution methods or relevance methods) by exploiting techniques such as input perturbation (Li et al., 2016), gradient analysis (Dimopoulos et al., 1995), and relevance propagation (Arras et al., 2017b). In addition, there are explanation methods designed for specific deep learning architectures, such as attention mechanisms (Ghaeini et al., 2018) and extractive rationale generation (Lei et al., 2016). We select some well-known explanation methods (those applicable to CNNs for text classification) and evaluate them together with two new explanation methods proposed in this paper.
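To make the perturbation-based idea concrete, here is a minimal sketch of leave-one-out occlusion attribution in the spirit of Li et al. (2016): each token's importance is estimated as the drop in the predicted class probability when that token is deleted. The `predict_proba` callable and the dummy classifier below are illustrative assumptions for this sketch, not code or an API from the paper.

```python
from typing import Callable, List, Sequence

def occlusion_attribution(
    tokens: List[str],
    predict_proba: Callable[[Sequence[str]], Sequence[float]],
    target_class: int,
) -> List[float]:
    """Score each token by how much deleting it lowers P(target_class)."""
    base = predict_proba(tokens)[target_class]
    scores = []
    for i in range(len(tokens)):
        perturbed = tokens[:i] + tokens[i + 1:]  # drop the i-th token
        scores.append(base - predict_proba(perturbed)[target_class])
    return scores

# Toy stand-in classifier (hypothetical, for illustration only):
# predicts "positive" more strongly as more keyword tokens appear.
def dummy_predict(tokens: Sequence[str]) -> List[float]:
    pos = sum(t in {"great", "excellent"} for t in tokens)
    p = min(0.5 + 0.2 * pos, 0.99)
    return [1.0 - p, p]

if __name__ == "__main__":
    sent = "the movie was great and the acting excellent".split()
    for tok, s in zip(sent, occlusion_attribution(sent, dummy_predict, 1)):
        print(f"{tok:>10s}  {s:+.2f}")
```

Under this toy model, deleting "great" or "excellent" lowers the positive-class probability, so those tokens receive the highest attribution scores; gradient- and propagation-based methods assign analogous per-token relevance without requiring one forward pass per token.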
Aug-29-2019
- Country:
- Europe > Belgium (0.14)
- North America > United States (0.14)
- Oceania > Australia (0.14)
- Genre:
- Research Report (0.82)
- Industry:
- Leisure & Entertainment (0.46)