Fairness-Aware Domain Generalization under Covariate and Dependence Shifts
Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen
While modern fairness-aware machine learning techniques have demonstrated significant success in various applications [1, 2, 3], their primary objective is to facilitate equitable decision-making, ensuring fairness across all demographic groups regardless of sensitive attributes such as race and gender. Nevertheless, state-of-the-art methods can encounter severe shortcomings during the inference phase, mainly due to poor generalization when the spurious correlation deviates from the patterns seen in the training data. Such correlation can manifest either between model outcomes and sensitive attributes [4, 5] or between model outcomes and non-semantic data features [6]. This issue originates from the presence of out-of-distribution (OOD) data and can result in catastrophic failures. Over the past decade, the machine learning community has made significant strides in studying the OOD generalization (or domain generalization, DG) problem, attributing poor generalization to distribution shifts from source domains to target domains. There are two dominant shift types [7]: concept shift and covariate shift. Concept shift refers to OOD samples drawn from a distribution with semantic change, e.g., dog vs. cat.
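For readers who want the distinction made precise, the two shift types admit a standard formalization; the notation below is a common convention in the DG literature, not taken from this excerpt. Writing $P_s$ and $P_t$ for the source and target joint distributions over inputs $X$ and labels $Y$:

$$\text{covariate shift:}\quad P_s(X) \neq P_t(X), \qquad P_s(Y \mid X) = P_t(Y \mid X),$$
$$\text{concept shift:}\quad P_s(Y \mid X) \neq P_t(Y \mid X).$$

The dependence shift named in the title, consistent with the excerpt's mention of spurious correlations between model outcomes and sensitive attributes [4, 5], concerns a change across domains in the dependence between outcomes and the sensitive attribute rather than in $P(X)$ or $P(Y \mid X)$ alone.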
arXiv.org Artificial Intelligence
Nov-23-2023