

Fairness under Graph Uncertainty: Achieving Interventional Fairness with Partially Known Causal Graphs over Clusters of Variables

Chikahara, Yoichi

arXiv.org Machine Learning

Algorithmic decisions about individuals require predictions that are not only accurate but also fair with respect to sensitive attributes such as gender and race. Causal notions of fairness align with legal requirements, yet many methods assume access to detailed knowledge of the underlying causal graph, which is a demanding assumption in practice. We propose a learning framework that achieves interventional fairness by leveraging a causal graph over clusters of variables, which is substantially easier to estimate than a variable-level graph. With possible adjustment cluster sets identified from such a cluster causal graph, our framework trains a prediction model by reducing the worst-case discrepancy between interventional distributions across these sets. To this end, we develop a computationally efficient barycenter kernel maximum mean discrepancy (MMD) that scales favorably with the number of sensitive attribute values. Extensive experiments show that our framework strikes a better balance between fairness and accuracy than existing approaches, highlighting its effectiveness under limited causal graph knowledge.
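The barycenter kernel MMD itself is specific to the paper and is not reproduced here. As background for the discrepancy being minimized, a minimal sketch of the plain (biased) kernel MMD estimate between two samples, assuming an RBF kernel and illustrative function names, is:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    sq = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased (V-statistic) estimate of the squared MMD: the squared
    # RKHS distance between the two empirical mean embeddings.
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. a mean-shifted one:
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

A fairness-oriented use would compare, for each pair of interventional distributions, the model's prediction samples in place of `x` and `y`; the paper's barycenter construction is what avoids the quadratic blow-up in the number of sensitive attribute values.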



Supplementary Material

In this supplementary material, we first provide an overview of our proof techniques in Appendix A and then ...

Neural Information Processing Systems

Our analysis of the generalization error is based on an extension of Gordon's Gaussian process inequality, the Convex Gaussian Min-max Theorem (CGMT), where R is a continuous function that is convex in the first argument and concave in the second. The main result of the CGMT is to connect two random optimization problems, the primary optimization (PO) and an auxiliary optimization (AO). The CGMT framework has been used to infer statistical properties of estimators in certain high-dimensional asymptotic regimes. The second step is to derive the point-wise limit of the AO objective as a convex-concave optimization problem over only a few scalar variables.
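The two random optimization problems are not reproduced in this excerpt. In the standard presentation of the CGMT (notation here is generic and may differ from the paper's, with $\psi$ playing the role of the convex-concave function $R$), the primary optimization (PO) and auxiliary optimization (AO) take the form:

```latex
\Phi(X) = \min_{w \in S_w} \max_{u \in S_u} \; u^\top X w + \psi(w, u),
\qquad
\phi(g, h) = \min_{w \in S_w} \max_{u \in S_u} \; \|w\|_2 \, g^\top u + \|u\|_2 \, h^\top w + \psi(w, u),
```

where $X$ has i.i.d. standard Gaussian entries and $g$, $h$ are independent standard Gaussian vectors. The CGMT transfers probabilistic statements about the AO, which involves only the vectors $g$ and $h$, back to the harder PO.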



Supplementary material - ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods

Neural Information Processing Systems

We used the sex and the education of the student's parents as the sensitive attributes for this dataset. We removed all features that are other expressions of the labels. Note that this is the only folktables dataset on which we report results in the main paper. Sex, age, and race are used as sensitive features for this dataset. We deem these features as not relevant for this use case.




ABCFair: an Adaptable Benchmark approach for Comparing Fairness Methods

Neural Information Processing Systems

Numerous methods have been implemented that pursue fairness with respect to sensitive features by mitigating biases in machine learning. Yet, the problem settings that each method tackles vary significantly, including the stage of intervention, the composition of sensitive features, the fairness notion, and the distribution of the output. Even in binary classification, the greatest common denominator of problem settings is small, which significantly complicates benchmarking. Hence, we introduce ABCFair, a benchmark approach that adapts to the desiderata of a real-world problem setting, enabling proper comparability between methods for any use case. We apply this benchmark to a range of pre-, in-, and postprocessing methods on both large-scale, traditional datasets and on a dual-label (biased and unbiased) dataset to sidestep the fairness-accuracy trade-off.
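As an illustration of one of the fairness notions such a benchmark must cover, a minimal sketch of the demographic parity gap (the function name and toy data are ours, not ABCFair's API) is:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    # Largest absolute difference in positive-prediction rates
    # across groups defined by the sensitive feature.
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

y_pred = np.array([1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 1, 1, 1])
gap = demographic_parity_gap(y_pred, group)  # |2/3 - 1/3| = 1/3
```

Other notions the abstract alludes to (e.g., equalized odds) condition these rates on the true label, which is exactly why the choice of fairness notion is one axis along which methods are hard to compare.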


Beyond Verification: Abductive Explanations for Post-AI Assessment of Privacy Leakage

Sonna, Belona, Grastien, Alban, Benn, Claire

arXiv.org Artificial Intelligence

Privacy leakage in AI-based decision processes poses significant risks, particularly when sensitive information can be inferred. We propose a formal framework to audit privacy leakage using abductive explanations, which identify the minimal sufficient evidence justifying a model's decisions and determine whether sensitive information is disclosed. Our framework formalizes both individual- and system-level leakage, introducing the notion of Potentially Applicable Explanations (PAE) to identify individuals whose outcomes can shield those with sensitive features. This approach provides rigorous privacy guarantees while producing human-understandable explanations, a key requirement for auditing tools. An experimental evaluation on the German Credit Dataset illustrates how the importance of a sensitive literal in the model's decision process affects privacy leakage. Despite computational challenges and simplifying assumptions, our results demonstrate that abductive reasoning enables interpretable privacy auditing, offering a practical pathway to reconciling transparency, model interpretability, and privacy preservation in AI decision-making.
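An abductive explanation is a subset-minimal set of feature literals that, on its own, entails the model's decision. A brute-force sketch for binary features (greedy deletion with an exhaustive sufficiency check; the toy model and all names are illustrative, not the paper's framework) is:

```python
from itertools import product

def is_sufficient(model, fixed, free):
    # The partial assignment `fixed` is sufficient if the decision is
    # constant over every completion of the `free` feature indices.
    n = len(fixed) + len(free)
    target = None
    for vals in product([0, 1], repeat=len(free)):
        x = [0] * n
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, vals):
            x[i] = v
        d = model(x)
        if target is None:
            target = d
        elif d != target:
            return False
    return True

def abductive_explanation(model, x):
    # Greedily drop literals of x while the remainder still forces
    # model(x); the result is subset-minimal (not necessarily smallest).
    n = len(x)
    keep = set(range(n))
    for i in range(n):
        trial = keep - {i}
        fixed = {j: x[j] for j in trial}
        free = [j for j in range(n) if j not in trial]
        if is_sufficient(model, fixed, free):
            keep = trial
    return sorted(keep)

# Toy credit model: approve iff (income AND collateral) OR guarantor.
model = lambda x: int((x[0] and x[1]) or x[2])
print(abductive_explanation(model, [1, 1, 0]))  # → [0, 1]
```

If a sensitive literal appears in every such explanation of an outcome, the outcome itself leaks that literal; the exhaustive check above is exponential in the number of free features, which reflects the computational challenges the abstract mentions.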


FusionDP: Foundation Model-Assisted Differentially Private Learning for Partially Sensitive Features

Zeng, Linghui, Liu, Ruixuan, Sarkar, Atiquer Rahman, Jiang, Xiaoqian, Ho, Joyce C., Xiong, Li

arXiv.org Artificial Intelligence

Ensuring the privacy of sensitive training data is crucial in privacy-preserving machine learning. However, in practical scenarios, privacy protection may be required for only a subset of features. For instance, in ICU data, demographic attributes like age and gender pose higher privacy risks due to their re-identification potential, whereas raw lab results are generally less sensitive. Traditional DP-SGD enforces privacy protection on all features in one sample, leading to excessive noise injection and significant utility degradation. We propose FusionDP, a two-step framework that enhances model utility under feature-level differential privacy. First, FusionDP leverages large foundation models to impute sensitive features given non-sensitive features, treating them as external priors that provide high-quality estimates of sensitive attributes without accessing the true values during model training. Second, we introduce a modified DP-SGD algorithm that trains models on both original and imputed features while formally preserving the privacy of the original sensitive features. We evaluate FusionDP on two modalities: a sepsis prediction task on tabular data from PhysioNet and a clinical note classification task from MIMIC-III. Comparisons against privacy-preserving baselines show that FusionDP significantly improves model performance while maintaining rigorous feature-level privacy, demonstrating the potential of foundation model-driven imputation to enhance the privacy-utility trade-off for various modalities.
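For context on the baseline being improved upon, a minimal sketch of one standard DP-SGD step (per-example clipping plus Gaussian noise; this is the traditional all-feature variant, not FusionDP's modified algorithm, and all names are illustrative) is:

```python
import numpy as np

def dp_sgd_step(grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    # grads: per-example gradients, shape (batch, dim).
    # Clip each example's gradient to L2 norm clip_norm, sum, add
    # Gaussian noise calibrated to the clipping bound, then average.
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = grads * scale
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)
```

Because the clipping bound and noise apply to the whole gradient, every feature pays the same privacy cost; the abstract's point is that feature-level privacy can restrict this cost to the sensitive features only.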