To be Robust and to be Fair: Aligning Fairness with Robustness

Junyi Chai, Xiaoqian Wang

arXiv.org Artificial Intelligence 

As machine learning systems are increasingly applied in social domains, it is imperative that models do not reproduce real-world discrimination. However, machine learning models have exhibited biased predictions against disadvantaged groups on several real-world tasks (Larson et al., 2016; Dressel and Farid, 2018; Mehrabi et al., 2021a). To improve fairness and reduce discrimination in machine learning systems, a variety of methods have been proposed to quantify and rectify bias (Hardt et al., 2016; Kleinberg et al., 2016; Mitchell et al., 2018). Despite this emerging interest in fairness, the topic of adversarial fairness attacks, and robustness against such attacks, has not yet been properly addressed. Most of the current literature on adversarial training focuses on improving robustness against accuracy attacks (Chakraborty et al., 2018), while the problem of adversarial attacks and adversarial training with respect to fairness remains largely unexplored.
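As an illustration of what "quantifying bias" can mean in practice, the sketch below computes two common group-fairness measures for a binary classifier: the demographic parity gap and the equalized odds gap (the latter following Hardt et al., 2016, which the abstract cites). This is a minimal, self-contained sketch; the function names and the toy data are illustrative assumptions, not taken from the paper.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    # Largest absolute gap in TPR or FPR between two groups (Hardt et al., 2016).
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):  # y = 1 gives the TPR gap, y = 0 the FPR gap
        mask_a = (group == 0) & (y_true == y)
        mask_b = (group == 1) & (y_true == y)
        gaps.append(abs(y_pred[mask_a].mean() - y_pred[mask_b].mean()))
    return max(gaps)

# Toy example: binary predictions with a binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))        # 0.25
print(equalized_odds_gap(y_true, y_pred, group))    # ~0.667

A fairness attack in the sense discussed above would perturb inputs or training data so that gaps like these grow, even while overall accuracy, the target of conventional adversarial attacks, stays largely unchanged.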
