Mitigating Clever Hans Strategies in Image Classifiers through Generating Counterexamples

Bender, Sidney, Delzer, Ole, Herrmann, Jan, Marxfeld, Heike Antje, Müller, Klaus-Robert, Montavon, Grégoire

arXiv.org Artificial Intelligence 

Deep learning models remain vulnerable to spurious correlations, leading to so-called Clever Hans predictors that undermine robustness even in large-scale foundation and self-supervised models. Group distributional robustness methods, such as Deep Feature Reweighting (DFR), rely on explicit group labels to upweight underrepresented subgroups, but face key limitations: (1) group labels are often unavailable, (2) low within-group sample sizes hinder coverage of the subgroup distribution, and (3) performance degrades sharply when multiple spurious correlations fragment the data into even smaller groups. We propose Counterfactual Knowledge Distillation (CFKD), a framework that sidesteps these issues by generating diverse counterfactuals, enabling a human annotator to efficiently explore and correct the model's decision boundaries through a knowledge distillation step. Our method does not require any confounder labels, scales effectively to multiple confounders, and yields balanced generalization across groups. We demonstrate CFKD's efficacy across five datasets, spanning synthetic tasks to an industrial application, with particularly strong gains in low-data regimes with pronounced spurious correlations. Additionally, we provide an ablation study on the effect of the chosen counterfactual explainer and teacher model, highlighting their impact on robustness.

1. Introduction

Deep learning has achieved remarkable progress in recent years, delivering state-of-the-art performance across a wide range of domains, including computer vision, natural language processing, and biomedical applications. However, despite these advancements, models frequently rely on spurious features--also known as confounders--which can give rise to so-called Clever Hans (CH) predictors [1, 2]. Such models may fit training data well and achieve high validation accuracy, yet fail catastrophically when deployed under realistic conditions.
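To make the knowledge-distillation step concrete, the following is a minimal sketch of a distillation loss that blends cross-entropy on annotator-corrected labels with a KL term pulling the student toward the teacher's (corrected) predictions. The function names, the `alpha` mixing weight, and the `temperature` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax; higher t softens the distribution.
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      alpha=0.5, temperature=2.0):
    """Hypothetical sketch: cross-entropy on (annotator-corrected) hard
    labels plus KL(teacher || student) at a softened temperature."""
    p_s = softmax(student_logits, temperature)
    p_t = softmax(teacher_logits, temperature)
    # KL divergence from student to teacher, averaged over the batch.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # Standard cross-entropy on the hard labels (temperature 1).
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(hard_labels)), hard_labels] + 1e-12).mean()
    # Scale the KL term by temperature**2 so gradients stay comparable.
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```

In a CFKD-style loop, the generated counterfactuals (relabeled by the annotator) would simply be added to the batch fed through such a loss, so the student learns the corrected decision boundary rather than the spurious one.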