$ϕ$-test: Global Feature Selection and Inference for Shapley Additive Explanations
Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
We propose $ϕ$-test, a global feature-selection and significance procedure for black-box predictors that combines Shapley attributions with selective inference. Given a trained model and an evaluation dataset, $ϕ$-test performs SHAP-guided screening and fits a linear surrogate on the screened features via a selection rule with a tractable selective-inference form. For each retained feature, it reports a Shapley-based global score, a surrogate coefficient, and post-selection $p$-values and confidence intervals in a single global feature-importance table. Experiments on real tabular regression tasks with tree-based and neural backbones suggest that $ϕ$-test can retain much of the original model's predictive ability while using only a few features, and that the selected feature sets remain fairly stable across resamples and backbone classes. In these settings, $ϕ$-test serves as a practical global explanation layer linking Shapley-based importance summaries with classical statistical inference.
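The pipeline described above (Shapley-based screening, then a linear surrogate with post-selection inference on the retained features) can be sketched in a minimal, self-contained form. This is not the authors' implementation: the Monte Carlo permutation estimator stands in for SHAP, the closed-form `model` function stands in for a trained black box, and naive OLS $t$-statistics stand in for the paper's selective-inference corrections. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic evaluation data: the response depends on features 0 and 1 only
# (an assumption made purely for this illustration).
n, d = 200, 6
X = rng.normal(size=(n, d))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=n)

# Stand-in black-box predictor; in practice this would be a fitted model's predict().
def model(Z):
    return 3 * Z[:, 0] - 2 * Z[:, 1]

def mc_shapley_global(model, X, n_perm=50, rng=rng):
    """Monte Carlo estimate of the mean |Shapley value| per feature.

    Marginal contributions are measured against a background point of
    column means, a common simplification in SHAP-style attribution.
    """
    n, d = X.shape
    background = X.mean(axis=0)
    phi = np.zeros((n, d))
    for _ in range(n_perm):
        order = rng.permutation(d)
        Z = np.tile(background, (n, 1))   # start every row at the background
        prev = model(Z)
        for j in order:
            Z[:, j] = X[:, j]             # reveal feature j
            cur = model(Z)
            phi[:, j] += cur - prev       # marginal contribution of j
            prev = cur
    phi /= n_perm
    return np.abs(phi).mean(axis=0)       # global importance score per feature

# Step 1: Shapley-guided screening -- keep the top-k features by global score.
scores = mc_shapley_global(model, X)
k = 2
screened = np.argsort(scores)[::-1][:k]

# Step 2: linear surrogate on the screened features, with naive OLS t-statistics.
# (The paper replaces these with selective-inference p-values and intervals,
# which correct for the fact that the features were chosen data-dependently.)
Xs = np.column_stack([np.ones(n), X[:, screened]])
beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ beta
dof = n - Xs.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
t_stats = beta / se
```

On this toy problem the screening recovers the two informative features, and the surrogate coefficients approximate the true effects; a real application would swap in a SHAP explainer for the attribution step and the selective-inference machinery for the uncorrected $t$-statistics.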
December 9, 2025