Fairness in Graph Mining: A Survey

Yushun Dong, Jing Ma, Song Wang, Chen Chen, Jundong Li

arXiv.org Artificial Intelligence 

Abstract--Graph mining algorithms have been playing a significant role in myriad fields over the years. However, despite their promising performance on various graph analytical tasks, most of these algorithms lack fairness considerations. As a consequence, they could lead to discrimination towards certain populations when exploited in human-centered applications. Recently, algorithmic fairness has been extensively studied in graph-based applications. In contrast to algorithmic fairness on independent and identically distributed (i.i.d.) data, fairness in graph mining has exclusive backgrounds, taxonomies, and fulfilling techniques. In this survey, we provide a comprehensive and up-to-date introduction to the existing literature on fair graph mining. Specifically, we propose a novel taxonomy of fairness notions on graphs, which sheds light on their connections and differences. We further present an organized summary of existing techniques that promote fairness in graph mining. Finally, we discuss current research challenges and open questions, aiming to encourage cross-breeding ideas and further advances.

Graph-structured data is pervasive in diverse real-world applications, e.g., E-commerce [102], [121], health care [37], [53], traffic forecasting [72], [100], and drug discovery [15], [172]. Accordingly, a wide range of graph mining algorithms have been proposed to gain a deeper understanding of such data. These algorithms have shown promising performance on graph analytical tasks such as node classification [59], [86], [161] and link prediction [4], [103], [109]. However, most of them lack fairness considerations. Consequently, they could yield discriminatory results towards certain populations when such algorithms are exploited in human-centered applications [80]. For example, a job recommender system may unfavorably recommend fewer job opportunities to individuals of a certain gender [97] or individuals in an underrepresented ethnic group [150].

Compared with achieving fairness on independent and identically distributed (i.i.d.) data, fulfilling fairness in graph mining can be non-trivial due to two main challenges. The first challenge is to formulate proper fairness notions as the criteria to determine the existence of unfairness (i.e., bias). Although a vast number of traditional algorithmic fairness notions have been proposed centered on i.i.d. data, many of them do not carry over directly to graphs, where bias can also be encoded in the topology itself. For example, the same population can be connected with different topologies, as in Figures 1a and 1b, where each node represents an individual and the color of a node denotes its demographic subgroup membership. Compared with the graph topology in Figure 1a, the topology in Figure 1b has more intra-group edges than inter-group edges. The dominance of intra-group edges in the graph topology is a common type of bias.
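The Figure 1a/1b contrast reduces to a simple count over subgroup labels: how many edges connect nodes of the same subgroup (intra-group) versus different subgroups (inter-group). The following minimal sketch illustrates that count; it is not code from the survey, and the graphs, node labels, and helper name are hypothetical, constructed only to mirror the contrast described above.

```python
def edge_group_counts(edges, group):
    """Count intra- and inter-group edges of an undirected graph.

    edges: iterable of (u, v) node pairs
    group: dict mapping each node to its demographic subgroup label
    """
    intra = inter = 0
    for u, v in edges:
        if group[u] == group[v]:
            intra += 1  # both endpoints in the same subgroup
        else:
            inter += 1  # endpoints in different subgroups
    return intra, inter

# Two hypothetical topologies over the same six individuals,
# loosely mirroring the Figure 1a / 1b contrast in the text.
group = {0: "blue", 1: "blue", 2: "blue", 3: "red", 4: "red", 5: "red"}

topology_a = [(0, 3), (1, 4), (2, 5), (0, 4), (1, 5)]  # mostly inter-group
topology_b = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]  # mostly intra-group

for name, edges in [("Figure 1a-like", topology_a),
                    ("Figure 1b-like", topology_b)]:
    intra, inter = edge_group_counts(edges, group)
    print(f"{name}: {intra} intra-group vs. {inter} inter-group edges")
```

Running the sketch reports 0 intra-group edges for the first topology and 4 for the second: the same population, under a different topology, exhibits the intra-group edge dominance that the survey identifies as a common type of structural bias.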
