A Model Ensemble-Based Post-Processing Framework for Fairness-Aware Prediction

Zhao, Zhouting, Ng, Tin Lok James

arXiv.org Machine Learning

Striking an optimal balance between predictive performance and fairness continues to be a fundamental challenge in machine learning. In this work, we propose a post-processing framework that facilitates fairness-aware prediction by leveraging model ensembling. Designed to operate independently of any specific model internals, our approach is widely applicable across various learning tasks, model architectures, and fairness definitions. Through extensive experiments spanning classification, regression, and survival analysis, we demonstrate that the framework effectively enhances fairness while maintaining, or only minimally affecting, predictive accuracy.
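The abstract does not spell out the mechanics, but the general idea of ensemble-based fairness post-processing can be illustrated with a minimal sketch. The sketch below is hypothetical and not the authors' algorithm: it searches over convex combination weights of two binary classifiers' scores and picks the combination with the best accuracy-minus-unfairness trade-off, using the demographic-parity gap as the fairness measure. All data and the trade-off weight are made up for illustration.

```python
import numpy as np

def dp_gap(pred, group):
    """Demographic-parity gap: |P(yhat=1 | group=0) - P(yhat=1 | group=1)|."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Synthetic data: labels y, a binary sensitive attribute, and two
# ensemble members' probability scores (p1 is deliberately group-biased).
rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
p1 = np.clip(0.7 * y + 0.2 * group + rng.normal(0, 0.1, n), 0, 1)
p2 = np.clip(0.6 * y + rng.normal(0, 0.15, n), 0, 1)

# Post-processing step: grid-search the mixing weight lam, scoring each
# candidate by accuracy minus an ad hoc penalty on the fairness gap.
best_lam, best_score = 0.0, -np.inf
for lam in np.linspace(0, 1, 21):
    pred = ((lam * p1 + (1 - lam) * p2) >= 0.5).astype(int)
    acc = (pred == y).mean()
    score = acc - 5.0 * dp_gap(pred, group)  # 5.0 is an arbitrary trade-off
    if score > best_score:
        best_lam, best_score = lam, score
```

Because the search operates only on model outputs, it is agnostic to the underlying architectures, which is the model-independence property the abstract emphasizes.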







Supplementary to "DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning"

Neural Information Processing Systems

MTL: In MTL, deep learning-based architectures that perform soft-parameter sharing, i.e., share model parameters partially, are proving to be effective at exploiting both the commonalities and differences among tasks [6]. Our work is also related to [5], who introduced "routers" (similar to gates) that can choose which layers or components of layers to activate per task. The routers in the latter work are not differentiable and require reinforcement learning. To construct α, there are two cases to consider: (i) s = k and (ii) s < k. If s = k, then set α_i = log(w_{t,i}) for i ∈ [k]. Our base case is for t = 1.
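The s = k case rests on a standard identity: if the target weights w_t are strictly positive and sum to 1, then setting the logits to α_i = log(w_{t,i}) makes the softmax recover w_t exactly, since softmax(log(w))_i = w_i / Σ_j w_j = w_i. A quick numerical check (names are illustrative, not from the paper's code):

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax."""
    e = np.exp(a - a.max())
    return e / e.sum()

def construct_alpha(w):
    """Logits whose softmax equals the strictly positive weights w (s = k case)."""
    return np.log(w)

# Example gate weights over k = 4 experts, summing to 1.
w = np.array([0.1, 0.2, 0.3, 0.4])
alpha = construct_alpha(w)
recovered = softmax(alpha)  # equals w up to floating-point error
```

The subtraction of the maximum inside `softmax` only shifts all logits by a constant, which the normalization cancels, so it does not affect the recovery.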