Interpretable by Design: Wrapper Boxes Combine Neural Performance with Faithful Explanations
Yiheng Su, Junyi Jessy Li, Matthew Lease
–arXiv.org Artificial Intelligence
Can we preserve the accuracy of neural models while also providing faithful explanations? We present wrapper boxes, a general approach to generate faithful, example-based explanations for model predictions while maintaining predictive performance. After training a neural model as usual, its learned feature representation is input to a classic, interpretable model to perform the actual prediction. This simple strategy is surprisingly effective, with results largely comparable to those of the original neural model, as shown across three large pre-trained language models, two datasets of varying scale, four classic models, and four evaluation metrics. Moreover, because these classic models are interpretable by design, the subset of training examples that determine classic model predictions can be shown directly to users.
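The core recipe described above (train a neural model, then feed its learned representations to a classic interpretable model such as k-nearest neighbors) can be sketched roughly as follows. This is a minimal illustration, not the authors' exact implementation: the synthetic features stand in for encoder embeddings, and the choice of scikit-learn's `KNeighborsClassifier` and the parameter values are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in for frozen neural encoder outputs: in practice these would be,
# e.g., sentence embeddings from a fine-tuned pre-trained language model.
train_feats = rng.normal(size=(100, 16))
train_labels = rng.integers(0, 2, size=100)
test_feats = rng.normal(size=(10, 16))

# A classic, interpretable model performs the actual prediction
# on top of the neural representations.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_feats, train_labels)
preds = knn.predict(test_feats)

# Faithful, example-based explanation: the subset of training examples
# that directly determined each prediction can be shown to users.
dists, neighbor_idx = knn.kneighbors(test_feats)
print(preds.shape, neighbor_idx.shape)
```

Other interpretable wrappers (e.g., decision trees) would slot in the same way, each exposing its own notion of which training examples drove the prediction.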
Nov-14-2023