Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy, Sarath Chandar
arXiv.org Artificial Intelligence
Neural networks for NLP are becoming increasingly complex and widespread, and there is growing concern about whether these models can be used responsibly. Explaining models helps to address safety and ethical concerns and is essential for accountability. Interpretability serves to provide these explanations in terms that are understandable to humans. Additionally, post-hoc methods provide explanations after a model is learned and are generally model-agnostic. This survey categorizes how recent post-hoc interpretability methods communicate explanations to humans, discusses each method in depth, and examines how each is validated, as validation is often a common concern.
Nov-28-2023
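To make the abstract's notion of a model-agnostic, post-hoc explanation concrete, here is a minimal sketch of one classic family of such methods: occlusion-based input importance, where each token's importance is the drop in the model's score when that token is removed. The toy sentiment "model" below is purely illustrative (an assumption, not from the survey); any black-box scoring function could be substituted.

```python
# Toy word-polarity lexicon standing in for a real model (illustrative only).
POSITIVE = {"great": 1.0, "good": 0.5, "excellent": 1.0}
NEGATIVE = {"bad": -1.0, "terrible": -1.0}

def model_score(tokens):
    """Stand-in black box: sums word polarities. A real use case would
    call a trained classifier here; the method needs only its outputs."""
    return sum(POSITIVE.get(t, 0.0) + NEGATIVE.get(t, 0.0) for t in tokens)

def occlusion_importance(tokens):
    """Post-hoc, model-agnostic explanation: importance of each token is
    the score drop when that token is occluded (removed) from the input."""
    base = model_score(tokens)
    return {
        t: base - model_score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

tokens = "the movie was great".split()
print(occlusion_importance(tokens))
# → {'the': 0.0, 'movie': 0.0, 'was': 0.0, 'great': 1.0}
```

Because the explanation is computed purely from model inputs and outputs after training, it applies unchanged to any classifier, which is what makes such post-hoc methods model-agnostic.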