Thermodynamics of Interpretation
Shams Mehdi, Pratyush Tiwary
arXiv.org Artificial Intelligence
Over the past few years, different types of data-driven Artificial Intelligence (AI) techniques have been widely adopted across various domains of science for generating predictive models. However, because of their black-box nature, it is crucial to establish trust in these models before accepting them as accurate. One way of achieving this goal is through a post-hoc interpretation scheme that can put forward the reasons behind a black-box model's prediction. In this work, we propose a classical thermodynamics-inspired approach for this purpose: Thermodynamically Explainable Representations of AI and other black-box Paradigms (TERP). TERP works by constructing a linear, local surrogate model that approximates the behaviour of the black-box model within a small neighborhood around the instance being explained. By employing a simple forward feature selection algorithm, TERP assigns an interpretability score to all the possible surrogate models. Compared to existing methods, TERP improves interpretability by selecting an optimal interpretation from these models through simple parallels with classical thermodynamics. To validate TERP as a generally applicable method, we demonstrate how it can be used to obtain interpretations of a wide range of black-box model architectures, including deep learning autoencoders, recurrent neural networks, and convolutional neural networks, applied to different domains including molecular simulations, image classification, and text classification, respectively.
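The abstract's core recipe (sample a neighborhood around the instance, fit local linear surrogates, and rank them by greedy forward feature selection) can be sketched as follows. This is an illustrative outline, not the authors' implementation: the `local_surrogate` and `forward_selection` names, the Gaussian perturbation scheme, and the use of residual sum of squares as a stand-in for TERP's thermodynamics-based interpretability score are all assumptions for the sake of the example.

```python
import numpy as np

def local_surrogate(black_box, x0, n_samples=500, sigma=0.1, seed=0):
    """Sample a Gaussian neighborhood around instance x0 and record
    the black-box model's outputs (hypothetical sampling scheme)."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    return X, black_box(X)

def forward_selection(X, y, max_features=None):
    """Greedy forward feature selection over linear surrogates.
    For each model size k, returns the chosen feature subset and its
    residual sum of squares -- a simple stand-in for the paper's
    thermodynamics-inspired interpretability score."""
    n, d = X.shape
    max_features = max_features or d
    selected, remaining, results = [], list(range(d)), []
    for _ in range(max_features):
        best = None
        for j in remaining:
            cols = selected + [j]
            # Linear surrogate with intercept, fit by least squares.
            A = np.column_stack([X[:, cols], np.ones(n)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.sum((A @ coef - y) ** 2)
            if best is None or err < best[1]:
                best = (j, err)
        selected.append(best[0])
        remaining.remove(best[0])
        results.append((list(selected), best[1]))
    return results
```

For a toy black box that depends linearly on features 0 and 2, e.g. `lambda X: 3*X[:, 0] - 2*X[:, 2]`, the first two greedy steps recover exactly those two features, after which the surrogate's residual error is essentially zero; TERP's contribution, per the abstract, is replacing the raw error ranking with a principled thermodynamic criterion for choosing among these candidate models.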
Mar-3-2023