The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers, Jonas Schneider
Visual counterfactual explanation (CF) methods modify image concepts, e.g., shape, to change a prediction to a predefined outcome while closely resembling the original query image. Unlike self-explainable models (SEMs) and heatmap techniques, they grant users the ability to examine hypothetical "what-if" scenarios. Previous CF methods either entail post-hoc training, limiting the balance between transparency and CF quality, or demand optimization during inference. To bridge the gap between transparent SEMs and CF methods, we introduce the GdVAE, a self-explainable model based on a conditional variational autoencoder (CVAE), featuring a Gaussian discriminant analysis (GDA) classifier and integrated CF explanations. Full transparency is achieved through a generative classifier that leverages class-specific prototypes for the downstream task and a closed-form solution for CFs in the latent space. The consistency of CFs is improved by regularizing the latent space with the explainer function. Extensive comparisons with existing approaches affirm the effectiveness of our method in producing high-quality CF explanations while preserving transparency. Code and models are publicly available.
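To make the "closed-form solution for CFs in the latent space" concrete, the following is a minimal sketch of how a closed-form counterfactual can be derived under a binary GDA classifier with shared covariance and equal priors. Under these assumptions the class log-odds are linear in the latent code, and the minimal Mahalanobis-distance shift that moves a latent toward a target confidence has an analytic form. The function name, variable names, and the equal-prior/shared-covariance simplifications are illustrative assumptions, not the paper's API or exact formulation.

```python
import numpy as np

def closed_form_cf(z, mu0, mu1, Sigma, target_logit=0.0):
    """Shift latent code z along the GDA discriminant direction so that the
    class log-odds equal target_logit (0.0 places z on the decision boundary).

    Assumes a binary GDA classifier with shared covariance Sigma, class
    prototypes (means) mu0 / mu1, and equal class priors. This is a
    simplified illustration of a closed-form latent-space counterfactual."""
    Sigma_inv = np.linalg.inv(Sigma)
    # Linear decision function: logit(z) = w @ z + b
    w = Sigma_inv @ (mu1 - mu0)
    b = 0.5 * (mu0 @ Sigma_inv @ mu0 - mu1 @ Sigma_inv @ mu1)
    logit = w @ z + b
    # Minimal Mahalanobis-distance step that reaches the target log-odds:
    # solve w @ (z + delta * Sigma @ w) + b = target_logit for delta.
    delta = (target_logit - logit) / (w @ Sigma @ w)
    return z + delta * (Sigma @ w)
```

For example, with identity covariance, prototypes at the origin and at (1, 1), and a query latent z = (1, 2), the counterfactual lands exactly on the decision boundary, where the log-odds vanish.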