Group membership:
- Europe (0.14)
- North America > United States > California > San Diego County > San Diego (0.04)
- Asia (0.04)
- (3 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- (7 more...)
- North America > United States (0.14)
- North America > Canada (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Data Science > Data Mining (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis (0.46)
Probabilistic Fair Clustering
In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of the vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and a valid clustering may also be required to represent each color appropriately in the solution. Prior work in fair clustering assumes complete knowledge of group membership. In this paper, we generalize this setting by assuming imperfect knowledge of group membership through probabilistic assignments, and we present algorithms for this more general setting with approximation-ratio guarantees. We also address the problem of metric membership, where group membership carries a notion of order and distance. Experiments with our proposed algorithms and with baselines validate our approach and surface nuanced concerns that arise when group membership is not known deterministically.
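A minimal sketch of the probabilistic-membership idea the abstract describes (our own construction, not the paper's algorithm; all names and data here are illustrative): when each point only has a probability of carrying the protected color, a cluster's color representation becomes an expectation over those probabilities rather than an exact count, and a fairness constraint can bound that expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 12, 3
p = rng.uniform(size=n)              # p[i] = P(point i has the protected color)
assign = rng.integers(0, k, size=n)  # some fixed clustering of the points

def expected_color_fraction(p, assign, k):
    """Expected fraction of the protected color inside each cluster."""
    return np.array([p[assign == c].mean() if (assign == c).any() else 0.0
                     for c in range(k)])

fracs = expected_color_fraction(p, assign, k)
overall = p.mean()
delta = 0.25
# a probabilistic fairness constraint: each cluster's expected color
# fraction must lie within [overall - delta, overall + delta]
violations = int(np.sum(np.abs(fracs - overall) > delta))
print(fracs.round(3), round(overall, 3), violations)
```

A fair-clustering algorithm in this setting would search over assignments to minimize the clustering objective subject to `violations == 0`; here we only evaluate the constraint for one fixed assignment.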
Fairness for the People, by the People: Minority Collective Action
Omri Ben-Dov, Samira Samadi, Amartya Sanyal, Alexandru Ţifrea
Machine learning models often preserve biases present in their training data, leading to unfair treatment of certain minority groups. Although an array of firm-side bias-mitigation techniques exists, they typically incur utility costs and require organizational buy-in. Because many models rely on user-contributed data, end-users can instead induce fairness through the framework of Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness at a small cost in overall prediction error.
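A toy illustration of the collective-action mechanism (our own construction, not one of the paper's three methods; the data, learner, and relabeling choice are all assumptions): a single coordinated minority member relabels her own training point, the firm retrains its unchanged learner, and the demographic-parity gap shrinks at a small accuracy cost.

```python
import numpy as np

def fit_threshold(X, y):
    """Firm's learner: the score threshold minimizing training error."""
    best_t, best_err = None, np.inf
    for t in np.unique(X):
        err = np.mean((X >= t).astype(int) != y)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def dp_gap(X, g, t):
    """Demographic-parity gap: |P(pos | majority) - P(pos | minority)|."""
    pos = X >= t
    return abs(pos[g == 0].mean() - pos[g == 1].mean())

# scores, true labels, and group (0 = majority, 1 = minority)
X = np.array([0.05, 0.10, 0.70, 0.80, 0.15, 0.25, 0.35, 0.55])
y = np.array([0,    0,    1,    1,    0,    0,    0,    1])
g = np.array([0,    0,    0,    0,    1,    1,    1,    1])

t0 = fit_threshold(X, y)
gap_before = dp_gap(X, g, t0)   # 0.25: minority receives fewer positives

y_action = y.copy()
y_action[6] = 1                 # one minority member relabels her own point
t1 = fit_threshold(X, y_action)
gap_after = dp_gap(X, g, t1)    # 0.0: parity restored

# accuracy cost, measured against the original (unrelabeled) labels
err_cost = (np.mean((X >= t1).astype(int) != y)
            - np.mean((X >= t0).astype(int) != y))
print(gap_before, gap_after, err_cost)  # cost is 1/8 of the training set
```

The firm's training process is untouched throughout; only the labels the minority contributes change, which is the defining feature of the collective-action framing.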
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > United States > Florida > Broward County (0.04)
- (5 more...)
- Health & Medicine (0.67)
- Education > Educational Setting (0.46)
- Banking & Finance (0.46)
- North America > United States > Maryland > Prince George's County > College Park (0.04)
- North America > Canada (0.04)
- North America > United States (0.28)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Information Technology (0.93)
- Health & Medicine (0.67)
Incorporating Fairness Constraints into Archetypal Analysis
Aleix Alcacer, Irene Epifanio
Archetypal Analysis (AA) is an unsupervised learning method that represents data as convex combinations of extreme patterns called archetypes. While AA provides interpretable and low-dimensional representations, it can inadvertently encode sensitive attributes, leading to fairness concerns. In this work, we propose Fair Archetypal Analysis (FairAA), a modified formulation that explicitly reduces the influence of sensitive group information in the learned projections. We also introduce FairKernelAA, a nonlinear extension that addresses fairness in more complex data distributions. Our approach incorporates a fairness regularization term while preserving the structure and interpretability of the archetypes. We evaluate FairAA and FairKernelAA on synthetic datasets, including linear, nonlinear, and multi-group scenarios, demonstrating their ability to reduce group separability -- as measured by maximum mean discrepancy and linear separability -- without substantially compromising explained variance. We further validate our methods on the real-world ANSUR I dataset, confirming their robustness and practical utility. The results show that FairAA achieves a favorable trade-off between utility and fairness, making it a promising tool for responsible representation learning in sensitive applications.
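A minimal sketch of the separability metric the abstract evaluates with (our own code, not the authors'; the Gaussian data and kernel bandwidth are assumptions): the squared maximum mean discrepancy (MMD) between the two groups' learned projections under an RBF kernel. A fair representation should drive this toward zero, while a representation that encodes the sensitive attribute keeps it large.

```python
import numpy as np

def rbf_mmd2(A, B, gamma=1.0):
    """Biased estimate of squared MMD between samples A and B, RBF kernel."""
    def k(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(A, A).mean() + k(B, B).mean() - 2.0 * k(A, B).mean()

rng = np.random.default_rng(0)
# "unfair" projections: the two groups' means differ, so they are separable
unfair_a = rng.normal(0.0, 1.0, size=(200, 2))
unfair_b = rng.normal(2.0, 1.0, size=(200, 2))
# "fair" projections: both groups drawn from the same distribution
fair_a = rng.normal(0.0, 1.0, size=(200, 2))
fair_b = rng.normal(0.0, 1.0, size=(200, 2))

m_unfair = rbf_mmd2(unfair_a, unfair_b)
m_fair = rbf_mmd2(fair_a, fair_b)
print(m_unfair, m_fair)  # the fair projections yield a much smaller MMD
```

In a FairAA-style objective, a term of this form (or a linear-separability surrogate) would be added as a regularizer on the projections; here we only compute the metric on fixed samples.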
- North America > United States (0.28)
- Europe > Spain (0.04)