Improving Fairness of Large Language Models in Multi-document Summarization
Haoyuan Li, Rui Zhang, Snigdha Chaturvedi
Fairness in multi-document summarization (MDS) is crucial for providing comprehensive views across documents with diverse social attribute values, which can significantly impact decision-making. For example, a summarization system that tends to overrepresent negative reviews of products can mislead customers into disregarding good products. Previous work measures fairness in MDS at two levels: summary-level fairness, which concerns individual summaries, and corpus-level fairness, which concerns fairness across a corpus of summaries. Recent methods primarily address summary-level fairness. We propose FairPO, a preference tuning method that targets both summary-level and corpus-level fairness in MDS. To improve summary-level fairness, we generate preference pairs by perturbing document sets. To improve corpus-level fairness, we perform fairness-aware preference tuning that dynamically adjusts the weights of preference pairs. Our experiments show that FairPO outperforms strong baselines while maintaining the critical qualities of summaries. The code is available at https://github.com/leehaoyuan/coverage_fairnes.
arXiv.org Artificial Intelligence
Jun-13-2025
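
The abstract describes preference tuning in which each preference pair carries a weight that is adjusted dynamically from a corpus-level fairness signal. Below is a minimal sketch of that idea, assuming a DPO-style objective; this is not the authors' released implementation, and the function names and reweighting rule are illustrative assumptions.

```python
# A minimal sketch assuming a DPO-style preference objective; NOT the
# authors' released code. All names and the weighting rule are assumptions.
import torch
import torch.nn.functional as F

def weighted_preference_loss(policy_chosen_logps, policy_rejected_logps,
                             ref_chosen_logps, ref_rejected_logps,
                             pair_weights, beta=0.1):
    """DPO-style loss with a per-pair weight (all args: tensors of shape [batch])."""
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Upweighting a pair pushes the policy harder toward that pair's
    # preferred (fairer) summary.
    return (-pair_weights * F.logsigmoid(logits)).mean()

def update_pair_weights(pair_attr_values, corpus_coverage, floor=1e-3):
    """Toy corpus-level reweighting (an assumption, not the paper's exact rule):
    upweight pairs whose social-attribute value is underrepresented in the
    summaries generated so far. `corpus_coverage` maps value -> coverage share."""
    return torch.tensor([1.0 / max(corpus_coverage.get(v, floor), floor)
                         for v in pair_attr_values])
```

The dynamic aspect would come from the feedback loop: after each training round, coverage statistics over the generated summary corpus are recomputed and fed back into `update_pair_weights`, so pairs tied to underrepresented attribute values receive larger weights in the next round.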