Goto

Collaborating Authors

 Dong, Kaixiang


The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness

arXiv.org Artificial Intelligence

Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, which can reduce either prediction accuracy or individual fairness. To jointly characterize the susceptibility of prediction accuracy and individual fairness to adversarial perturbations, we introduce a novel robustness definition termed robust accurate fairness. Informally, robust accurate fairness requires that the predictions for an instance and its similar counterparts consistently align with the ground truth when subjected to input perturbations. We propose an adversarial attack approach, dubbed RAFair, to expose false or biased adversarial defects in DNNs, which either undermine prediction accuracy or compromise individual fairness. We then show that such adversarial instances can be effectively addressed by carefully designed benign perturbations that correct their predictions to be accurate and fair. Our work explores the double-edged sword of input perturbations to robust accurate fairness in DNNs and the potential of benign perturbations to correct adversarial instances.
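As a rough illustration of the definition above, here is a minimal Python sketch of a robust accurate fairness check. All names (model, similar_counterparts, perturbations) are hypothetical placeholders, not the paper's RAFair implementation; it assumes a scikit-learn-style classifier with a predict() method, a single categorical sensitive attribute, and perturbations that leave the sensitive attribute untouched.

    import numpy as np

    def similar_counterparts(x, sensitive_idx, values):
        """Inputs identical to x except for the sensitive attribute."""
        variants = []
        for v in values:
            if v != x[sensitive_idx]:
                x2 = x.copy()
                x2[sensitive_idx] = v
                variants.append(x2)
        return variants

    def is_robust_accurate_fair(model, x, y_true, sensitive_idx, values,
                                perturbations):
        """True iff x and all its similar counterparts keep the
        ground-truth prediction y_true under every perturbation."""
        for delta in perturbations:
            for z in [x] + similar_counterparts(x, sensitive_idx, values):
                if model.predict((z + delta).reshape(1, -1))[0] != y_true:
                    return False  # a false or biased adversarial defect
        return True

An attack such as RAFair searches for perturbations that falsify this check; the sketch only states the property being attacked.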


RobustFair: Adversarial Evaluation through Fairness Confusion Directed Gradient Search

arXiv.org Artificial Intelligence

Deep neural networks (DNNs) are vulnerable to various adversarial perturbations, including false perturbations that undermine prediction accuracy and biased perturbations that cause biased predictions for similar inputs. This paper introduces a novel approach, RobustFair, to evaluate the accurate fairness of DNNs subjected to such false or biased perturbations. RobustFair employs the fairness confusion matrix induced by accurate fairness to identify the input features crucial for perturbation. This matrix categorizes predictions as true fair, true biased, false fair, or false biased, and perturbations guided by it exert a dual impact on instances and their similar counterparts, either undermining prediction accuracy (robustness) or causing biased predictions (individual fairness). RobustFair then infers the ground truth of the generated adversarial instances from their loss function values, approximated by the total derivative. To leverage the generated instances for trustworthiness improvement, RobustFair further proposes a data augmentation strategy that prioritizes adversarial instances resembling the original training set for augmentation and model retraining. Notably, RobustFair excels at detecting intertwined robustness and individual fairness issues that standard robustness and individual fairness evaluations frequently overlook; it can thus enhance both evaluations by concurrently identifying defects in either domain. Empirical case studies and quantile regression analyses on benchmark datasets demonstrate the effectiveness of fairness confusion matrix guided perturbation for generating false or biased adversarial instances.
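To make the four categories concrete, here is a minimal Python sketch of the fairness confusion matrix classification, assuming binary predictions: an instance counts as fair when the model assigns its similar counterparts the same prediction, and true when that prediction matches the ground truth. The function name and inputs are illustrative, not RobustFair's API.

    def fairness_confusion_category(pred, counterpart_preds, y_true):
        """Place one instance into the fairness confusion matrix."""
        accurate = (pred == y_true)                  # true vs. false
        consistent = all(p == pred for p in counterpart_preds)  # fair vs. biased
        if accurate and consistent:
            return "true fair"
        if accurate:
            return "true biased"
        if consistent:
            return "false fair"
        return "false biased"

The gradient search described above steers perturbations toward the true biased, false fair, and false biased cells, i.e., toward instances that break individual fairness, accuracy, or both.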