Fair Representation Learning for Heterogeneous Information Networks
Zeng, Ziqian, Islam, Rashidul, Keya, Kamrun Naher, Foulds, James, Song, Yangqiu, Pan, Shimei
arXiv.org Artificial Intelligence
Recently, much attention has been paid to the societal impact of AI, especially concerns regarding its fairness. A growing body of research has identified unfair AI systems and proposed methods to debias them, yet many challenges remain. Representation learning for Heterogeneous Information Networks (HINs), a fundamental building block used in complex network mining, has socially consequential applications such as automated career counseling, but there have been few attempts to ensure that it will not encode or amplify harmful biases, e.g., sexism in the job market. To address this gap, in this paper we propose a comprehensive set of debiasing methods for fair HIN representation learning, including sampling-based, projection-based, and graph neural network (GNN)-based techniques. We systematically study the behavior of these algorithms, especially their capability to balance the trade-off between fairness and prediction accuracy. We evaluate the performance of the proposed methods in an automated career counseling application in which we mitigate gender bias in career recommendation. Based on the evaluation results on two datasets, we identify the most effective fair HIN representation learning techniques under different conditions.
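Of the three families of techniques the abstract names, projection-based debiasing is the simplest to illustrate: remove from each learned embedding its component along an estimated bias direction (e.g., a gender direction computed from group centroids). The sketch below is a hypothetical illustration of that general idea, not the paper's exact method; the function names and the way the bias direction is estimated are assumptions.

```python
import numpy as np

def debias_by_projection(embeddings, bias_direction):
    """Remove each embedding's component along the bias direction.

    embeddings: (n, d) array of node embeddings (hypothetical input).
    bias_direction: (d,) vector, e.g. the difference between the
    centroids of two demographic groups' embeddings (an assumption
    about how the direction might be estimated).
    """
    u = bias_direction / np.linalg.norm(bias_direction)
    # Subtract the projection of each row onto u.
    return embeddings - np.outer(embeddings @ u, u)

# Toy usage with random embeddings standing in for learned HIN vectors.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))
direction = rng.normal(size=8)
debiased = debias_by_projection(emb, direction)
# Each debiased vector is now orthogonal to the bias direction,
# so a linear probe along that direction carries no signal.
```

After projection, the embeddings retain all variance orthogonal to the bias direction, which is why this family of methods often trades little prediction accuracy for the fairness gain.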
Apr-18-2021