Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Stammer, Wolfgang, Schramowski, Patrick, Kersting, Kristian
These "visual" explanations are often insufficient, as the model's actual concept remains elusive. Moreover, without insights into the model's semantic concept, it is difficult --if not impossible-- to intervene on the model's behavior via its explanations, called Explanatory Interactive Learning. Consequently, we propose to intervene on a Neuro-Symbolic scene representation, which allows one to revise the model on the semantic level, e.g. "never focus on the color to make your decision". We compiled a novel confounded visual scene data set, the CLEVR-Hans data set, capturing complex compositions of different objects. The results of our experiments on CLEVR-Hans demonstrate that our semantic explanations, i.e. Figure 1: Neuro-Symbolic explanations are needed to revise compositional explanations at a per-object level, can identify deep learning models from focusing on irrelevant features confounders that are not identifiable using "visual" explanations via global feedback rules.
arXiv.org Artificial Intelligence
Dec-14-2020