Fairness in Submodular Maximization over a Matroid Constraint

Marwa El Halabi, Jakub Tarnawski, Ashkan Norouzi-Fard, Thuy-Duong Vuong

arXiv.org Artificial Intelligence 

Machine learning algorithms are increasingly used in decision-making processes, raising the concern that they may introduce or perpetuate bias and discrimination in automated decisions. This is particularly worrying in sensitive areas such as education, hiring, credit access, bail decisions, and law enforcement (Munoz et al., 2016; White House OSTP, 2022; European Union FRA, 2022). A growing body of work attempts to mitigate these risks by developing fair algorithms for fundamental problems, including classification (Zafar et al., 2017), ranking (Celis et al., 2018c), clustering (Chierichetti et al., 2017), voting (Celis et al., 2018a), matching (Chierichetti et al., 2019), influence maximization (Tsang et al., 2019), data summarization (Celis et al., 2018b), and many others.

In this work, we address fairness in the fundamental problem of submodular maximization over a matroid constraint, in the offline setting. Submodular functions capture a diminishing-returns property that arises naturally in a variety of machine learning problems, such as active learning (Golovin and Krause, 2011), data summarization (Lin and Bilmes, 2011), feature selection (Das and Kempe, 2011), and recommender systems (El-Arini and Guestrin, 2011). Matroids are a popular and expressive notion of independence system that encompasses a broad spectrum of useful constraints, e.g.
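To make the setting concrete: a set function $f$ is submodular if it exhibits diminishing returns, i.e., $f(A \cup \{e\}) - f(A) \geq f(B \cup \{e\}) - f(B)$ for all $A \subseteq B$ and $e \notin B$. The sketch below illustrates the classic greedy algorithm for monotone submodular maximization under a partition matroid, which is known to give a 1/2-approximation. It is not the paper's fair algorithm; the coverage function, group labels, and capacities are illustrative assumptions chosen only to make the example runnable.

# A minimal sketch, not the paper's method: greedy maximization of a
# monotone submodular function subject to a partition matroid constraint.
def greedy_over_partition_matroid(ground_set, f, groups, capacities):
    """Repeatedly add the element with the largest marginal gain whose
    addition keeps the solution independent in the partition matroid."""
    solution = set()
    counts = {g: 0 for g in capacities}  # elements picked per group
    improved = True
    while improved:
        improved = False
        best_elem, best_gain = None, 0.0
        for e in ground_set - solution:
            if counts[groups[e]] >= capacities[groups[e]]:
                continue  # adding e would violate the matroid constraint
            gain = f(solution | {e}) - f(solution)  # marginal gain of e
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is not None:
            solution.add(best_elem)
            counts[groups[best_elem]] += 1
            improved = True
    return solution

# Toy usage: a coverage function (monotone submodular) over a small
# universe, with elements partitioned into two hypothetical groups.
coverage_sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c'}, 4: {'d'}}
f = lambda S: len(set().union(*(coverage_sets[e] for e in S)))
groups = {1: 'red', 2: 'red', 3: 'blue', 4: 'blue'}
print(greedy_over_partition_matroid({1, 2, 3, 4}, f, groups,
                                    {'red': 1, 'blue': 1}))

A partition matroid like the one above (at most a fixed number of elements from each group) is also a simple way to see how fairness constraints interact with matroid independence, which is the tension the paper studies.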