Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts
Mateo Espinosa Zarlenga, Gabriele Dominici, Pietro Barbiero, Zohreh Shams, Mateja Jamnik
In this paper, we investigate how concept-based models (CMs) respond to out-of-distribution (OOD) inputs. CMs are interpretable neural architectures that first predict a set of high-level concepts (e.g., stripes, black) and then predict a task label from those concepts. In particular, we study the impact of concept interventions (i.e., operations where a human expert corrects a CM's mispredicted concepts at test time) on CMs' task predictions when inputs are OOD. Our analysis reveals a weakness in current state-of-the-art CMs, which we term leakage poisoning, that prevents them from properly improving their accuracy when intervened on for OOD inputs. To address this, we introduce MixCEM, a new CM that learns to dynamically exploit leaked information missing from its concepts only when this information is in-distribution. Our results across tasks with and without complete sets of concept annotations demonstrate that MixCEMs outperform strong baselines by significantly improving their accuracy for both in-distribution and OOD samples in the presence and absence of concept interventions.
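To make the two-stage pipeline and the intervention operation concrete, here is a minimal sketch of a concept bottleneck-style model in plain NumPy. All names, shapes, and the linear layers are illustrative assumptions for exposition, not the paper's MixCEM architecture: the model first maps inputs to concept probabilities, then maps concepts to task logits, and an intervention simply overwrites selected predicted concepts with expert-provided ground-truth values before the task head runs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConceptBottleneckModel:
    """Minimal two-stage concept model: x -> concept probabilities -> task logits.

    Hypothetical linear layers stand in for the neural networks used in
    practice; the intervention mechanism (overwriting concepts) is the
    operation the abstract describes.
    """

    def __init__(self, w_concepts, w_task):
        self.w_concepts = w_concepts  # (n_features, n_concepts)
        self.w_task = w_task          # (n_concepts, n_classes)

    def predict_concepts(self, x):
        # Stage 1: predict high-level concepts (e.g., "stripes", "black").
        return sigmoid(x @ self.w_concepts)

    def predict(self, x, interventions=None):
        c = self.predict_concepts(x)
        if interventions:
            # Test-time intervention: a human expert corrects mispredicted
            # concepts by setting them to their true values (0.0 or 1.0).
            for idx, true_val in interventions.items():
                c[idx] = true_val
        # Stage 2: predict the task label from the (possibly corrected) concepts.
        return c @ self.w_task

# Example: intervening on concept 0 changes the task prediction.
rng = np.random.default_rng(0)
model = ConceptBottleneckModel(rng.normal(size=(4, 3)), rng.normal(size=(3, 2)))
x = rng.normal(size=4)
logits_plain = model.predict(x)
logits_fixed = model.predict(x, interventions={0: 1.0})
```

Note that in a purely "hard" bottleneck like this sketch, the task head sees only the concepts, so a corrected concept fully propagates to the prediction; the leakage-poisoning problem the paper studies arises in softer architectures whose task heads also exploit extra information leaked through the concept representations.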