GlanceNets: Interpretable, Leak-proof Concept-based Models
Marconato, Emanuele, Passerini, Andrea, Teso, Stefano
–arXiv.org Artificial Intelligence
There is growing interest in concept-based models (CBMs), which combine high performance with interpretability by acquiring and reasoning with a vocabulary of high-level concepts. A key requirement is that the concepts be interpretable. Existing CBMs tackle this desideratum with a variety of heuristics based on unclear notions of interpretability, and fail to acquire concepts with the intended semantics. We address this by providing a clear definition of interpretability in terms of alignment between the model's representation and an underlying data generation process, and introduce GlanceNets, a new CBM that exploits techniques from disentangled representation learning and open-set recognition to achieve alignment, thus improving the interpretability of the learned concepts. We show that GlanceNets, paired with concept-level supervision, achieve better alignment than state-of-the-art approaches while preventing spurious information from unintentionally leaking into the learned concepts.
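The two ingredients the abstract names, a bottleneck of human-inspectable concepts and an open-set rejection step that keeps out-of-distribution inputs from leaking spurious information into those concepts, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual GlanceNets implementation: the class name, weights, and rejection threshold are all illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ConceptBottleneck:
    """Toy concept-based model: the label is computed ONLY from a small
    vocabulary of named concept activations, never from raw features.
    (Illustrative sketch; not the paper's GlanceNets architecture.)"""

    def __init__(self, concept_weights, task_weights, reject_threshold=0.6):
        self.concept_weights = concept_weights  # concept name -> feature weights
        self.task_weights = task_weights        # label -> {concept name: weight}
        self.reject_threshold = reject_threshold

    def concepts(self, x):
        # Concept layer: each concept is an independent sigmoid unit,
        # so its activation in [0, 1] is directly human-inspectable.
        return {name: sigmoid(sum(w * xi for w, xi in zip(ws, x)))
                for name, ws in self.concept_weights.items()}

    def predict(self, x):
        c = self.concepts(x)
        # Stand-in for the open-set recognition component: if no concept
        # fires confidently, flag the input as out-of-distribution instead
        # of forcing it through the concept vocabulary.
        if max(c.values()) < self.reject_threshold:
            return "reject", c
        scores = {label: sum(w * c[name] for name, w in ws.items())
                  for label, ws in self.task_weights.items()}
        return max(scores, key=scores.get), c
```

A usage sketch: with concepts "red" and "round", an input activating both is classified from the concept scores alone, while a featureless input is rejected rather than silently mapped onto concepts it does not exhibit.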
Oct-18-2022