GlanceNets: Interpretable, Leak-proof Concept-based Models

Neural Information Processing Systems

One reason is that the notion of interpretability is notoriously challenging to pin down, and therefore existing CBMs rely on different heuristics--such as encouraging the concepts to be sparse [1], orthonormal to each other [5], or to match the contents of concrete examples [3]--with unclear properties and incompatible goals.
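Two of the heuristics mentioned above--sparsity of concept activations and orthonormality of concept directions--can be sketched as simple regularizers on a toy linear concept layer. This is an illustrative assumption, not the actual objective of any of the cited works: the layer, shapes, and weights are all made up here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear concept layer: each row of W is one concept direction.
W = rng.normal(size=(4, 8))  # 4 concepts, 8 input features

def concept_activations(x, W):
    """Map inputs to concept scores in (0, 1) via a sigmoid (toy bottleneck)."""
    return 1.0 / (1.0 + np.exp(-x @ W.T))

def sparsity_penalty(c):
    # Sparsity heuristic: an L1 penalty pushes concept activations toward zero,
    # so each prediction depends on only a few concepts.
    return np.abs(c).mean()

def orthonormality_penalty(W):
    # Orthonormality heuristic: penalize the deviation of W W^T from the
    # identity, encouraging concept directions to be mutually orthogonal
    # and unit-norm (so concepts do not redundantly encode each other).
    gram = W @ W.T
    return ((gram - np.eye(W.shape[0])) ** 2).sum()
```

A training loss would add these penalties, with tuned weights, to the task loss; the sketch only shows why the two heuristics pull in different directions, which is the tension the passage describes.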