AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model
Dominici, Gabriele, Barbiero, Pietro, Giannini, Francesco, Gjoreski, Martin, Langheinrich, Marc
Interpretable deep learning aims to develop neural architectures whose decision-making processes can be understood by their users. Among these techniques, Concept Bottleneck Models enhance the interpretability of neural networks by integrating a layer of human-understandable concepts. These models, however, require training a new model from scratch, consuming significant resources and failing to leverage already trained large models. To address this issue, we introduce "AnyCBM", a method that transforms any existing trained model into a Concept Bottleneck Model with minimal impact on computational resources. We provide both theoretical and experimental insights showing the effectiveness of AnyCBMs in terms of classification performance and the effectiveness of concept-based interventions on downstream tasks.
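The core idea, as described in the abstract, is to attach a concept bottleneck to a frozen, already trained model rather than retraining from scratch. The following is a minimal NumPy sketch of that idea under illustrative assumptions, not the paper's actual architecture: `black_box_embed` stands in for any frozen pre-trained model, the concept encoder and task head are simple least-squares probes, and the data is synthetic with concept supervision available at training time.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_emb, n_concepts = 8, 16, 3
W_frozen = rng.normal(size=(d_in, d_emb))  # frozen weights, never updated


def black_box_embed(x):
    # Hypothetical stand-in for a frozen, pre-trained black box that maps
    # raw inputs to embeddings; in AnyCBM this is any existing model.
    return np.tanh(x @ W_frozen)


# Synthetic data: concepts are observed at training time, and the
# downstream label depends only on the concepts.
X = rng.normal(size=(200, d_in))
C = (X[:, :n_concepts] > 0).astype(float)   # ground-truth concepts
y = (C.sum(axis=1) > 1).astype(float)       # downstream task label

E = black_box_embed(X)                      # black box is only queried, not trained

# Concept encoder g: embeddings -> concepts (here a least-squares probe).
W_g, *_ = np.linalg.lstsq(E, C, rcond=None)
C_hat = E @ W_g

# Task head h: predicted concepts -> label, so every prediction passes
# through the human-interpretable concept layer.
W_h, *_ = np.linalg.lstsq(C_hat, y, rcond=None)
y_hat = (C_hat @ W_h > 0.5).astype(float)

# Concept intervention: an expert overrides one predicted concept with its
# true value, and the correction propagates to the downstream prediction.
C_fixed = C_hat.copy()
C_fixed[:, 0] = C[:, 0]
y_fixed = (C_fixed @ W_h > 0.5).astype(float)

print("task accuracy:", (y_hat == y).mean())
```

The design point the sketch illustrates is that only the two lightweight maps (`W_g`, `W_h`) are fit; the black box itself is queried but never updated, which is what keeps the computational cost low and makes interventions on the concept layer possible.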