FedCF: Fair Federated Conformal Prediction

Srinivasan, Anutam, Vadlamani, Aditya T., Meghrazi, Amin, Parthasarathy, Srinivasan

arXiv.org Artificial Intelligence 

Conformal Prediction (CP) is a widely used technique for quantifying uncertainty in machine learning models. In its standard form, CP offers probabilistic guarantees on coverage of the true label but is agnostic to sensitive attributes in the dataset. Several recent works have sought to incorporate fairness into CP by ensuring conditional coverage guarantees across different subgroups. One such method is Conformal Fairness (CF). In this work, we extend the CF framework to the Federated Learning setting and discuss how to audit a federated model for fairness by analyzing the fairness-related gaps for different demographic groups.

Ensuring model fairness is a critical thrust of trustworthy machine learning (ML). When not calibrated for fairness, ML models are prone to developing biases at each stage of an ML pipeline, as reflected in their predictions (Mehrabi et al., 2021). We define bias as disparate performance (i.e., accuracy for classification) between different sub-populations. In the data collection phase, measurement bias may occur due to disproportionate data collection across sub-populations, while representation bias manifests from a lack of training data on specific strata. During training, these biases are inductively learned by the model, leading to incorrect predictions in safety-critical tasks. Models are also susceptible to algorithmic bias, which results from regularization and optimization techniques during training that generalize incorrectly for marginalized groups. To mitigate these risks, many ML models must adhere to regulations set by local governing bodies (Hirsch et al., 2023). Toward model compliance, Komala et al. (2024), Agrawal et al. (2024), and Jones et al. (2025) have proposed approaches for enhancing model fairness across varying tasks, including federated graph learning and representation learning.
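To make the marginal coverage guarantee mentioned above concrete, the following is a minimal sketch of standard split conformal prediction for classification. The synthetic data, the 1 − p(true label) nonconformity score, and all variable names are our illustrative assumptions; this shows the standard CP baseline only, not the CF or FedCF method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): softmax-style probabilities for a held-out
# calibration set and a test set, drawn exchangeably from one distribution.
n_cal, n_test, n_classes = 500, 200, 3
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)
test_probs = rng.dirichlet(np.ones(n_classes), size=n_test)
test_labels = rng.integers(0, n_classes, size=n_test)

alpha = 0.1  # target miscoverage: sets should contain the true label ~90% of the time

# Nonconformity score: one minus the probability assigned to the true label.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample (n+1)/n correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(cal_scores, q_level, method="higher")

# Prediction set per test point: every class whose score falls below the threshold.
pred_sets = test_probs >= 1.0 - qhat  # boolean mask of shape (n_test, n_classes)

# Empirical coverage: fraction of test points whose true label is in the set.
coverage = pred_sets[np.arange(n_test), test_labels].mean()
```

Because exchangeability alone drives the guarantee, `coverage` lands near `1 - alpha` regardless of how accurate the underlying probabilities are; CF-style methods additionally demand this per subgroup, which the marginal construction above does not provide.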