Federated fairness-aware classification under differential privacy

Xue, Gengyu, Yu, Yi

arXiv.org Machine Learning

Privacy and algorithmic fairness have become two central issues in modern machine learning. Although each has separately emerged as a rapidly growing research area, their joint effect remains comparatively under-explored. In this paper, we systematically study the joint impact of differential privacy and fairness on classification in a federated setting, where data are distributed across multiple servers. Targeting demographic disparity constrained classification under federated differential privacy, we propose a two-step algorithm, namely FDP-Fair. In the special case where there is only one server, we further propose a simple yet powerful algorithm, namely CDP-Fair, serving as a computationally lightweight alternative. Under mild structural assumptions, theoretical guarantees on privacy, fairness and excess risk control are established. In particular, we disentangle the source of the private fairness-aware excess risk into a) intrinsic cost of classification, b) cost of private classification, c) non-private cost of fairness and d) private cost of fairness. Our theoretical findings are complemented by extensive numerical experiments on both synthetic and real datasets, highlighting the practicality of our designed algorithms.
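To make the two ingredients concrete, here is a minimal sketch (not the paper's FDP-Fair or CDP-Fair algorithms) of how demographic disparity can be measured privately: each group's positive-prediction rate is released with Laplace noise calibrated to the rate's sensitivity. All function names and the noise calibration are illustrative assumptions.

```python
import numpy as np

def dp_group_positive_rates(scores, groups, threshold, epsilon, rng):
    """Per-group positive-prediction rates released with Laplace noise.
    A single rate over n_g points has sensitivity 1/n_g, so adding
    Laplace(scale=1/(epsilon * n_g)) noise gives epsilon-DP per group
    (an illustrative calibration, not the paper's mechanism)."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        n_g = int(mask.sum())
        rate = float(np.mean(scores[mask] >= threshold))
        rates[int(g)] = rate + rng.laplace(scale=1.0 / (epsilon * n_g))
    return rates

def demographic_disparity(rates):
    """Demographic disparity: the largest gap between any two groups'
    positive-prediction rates."""
    vals = list(rates.values())
    return max(vals) - min(vals)
```

A fairness-aware classifier would then adjust its decision rule (e.g. per-group thresholds) until the noisy disparity estimate falls below a target level.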








A Unified Optimization Framework for Multiclass Classification with Structured Hyperplane Arrangements

Blanco, Víctor, Kothari, Harshit, Luedtke, James

arXiv.org Artificial Intelligence

In this paper, we propose a new mathematical optimization model for multiclass classification based on arrangements of hyperplanes. Our approach preserves the core support vector machine (SVM) paradigm of maximizing class separation while minimizing misclassification errors, and it is computationally more efficient than a previous formulation. We present a kernel-based extension that allows it to construct nonlinear decision boundaries. Furthermore, we show how the framework can naturally incorporate alternative geometric structures, including classification trees, $\ell_p$-SVMs, and models with discrete feature selection. To address large-scale instances, we develop a dynamic clustering matheuristic that leverages the proposed MIP formulation. Extensive computational experiments demonstrate the efficiency of the proposed model and dynamic clustering heuristic, and we report competitive classification performance on both synthetic datasets and real-world benchmarks from the UCI Machine Learning Repository, comparing our method with state-of-the-art implementations available in scikit-learn.
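The core geometric idea — a multiclass rule defined by an arrangement of hyperplanes, where each cell (sign pattern) of the arrangement is assigned a class — can be illustrated with a tiny NumPy sketch. This is only the decision rule, not the paper's MIP formulation or matheuristic; the function names and the cell-to-class mapping are illustrative assumptions.

```python
import numpy as np

def cell_signature(X, W, b):
    """Sign pattern of each point with respect to an arrangement of K
    hyperplanes w_k^T x + b_k = 0: a binary (n, K) matrix recording which
    side of each hyperplane the point falls on."""
    return (X @ W.T + b >= 0).astype(int)

def assign_classes(signatures, cell_to_class):
    """Map each point's cell (its sign pattern) to a class label."""
    return np.array([cell_to_class[tuple(s)] for s in signatures])

# Two axis-aligned hyperplanes in 2D split the plane into four cells,
# each of which can carry its own class label.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0])
cell_to_class = {(1, 1): 0, (0, 1): 1, (0, 0): 2, (1, 0): 3}
```

Training then amounts to choosing the hyperplanes (and the cell-to-class assignment) to maximize separation while minimizing misclassification, which is what the proposed optimization model encodes.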



Generalization Analysis for Classification on Korobov Space

Liu, Yuqing

arXiv.org Machine Learning

A practical challenge in classification is that, as the dimension grows large, the features take on special structured forms, so structural assumptions on the target function are required. Classifying functions from a Korobov space using shallow networks is one possible way to exploit such structure. A binary classification problem with an input (compact metric) space X of instances and output space Y = {-1, 1} of two labels aims at learning a (binary) classifier from samples that separates the instances in X into two classes.
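The setup above — a shallow (one-hidden-layer) network whose sign defines a binary classifier with labels in {-1, +1}, evaluated by its misclassification (0-1) error — can be sketched as follows. The parametrization and function names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def shallow_net(X, W1, b1, w2, b2):
    """One-hidden-layer ReLU network producing a real-valued score."""
    return np.maximum(X @ W1.T + b1, 0.0) @ w2 + b2

def classify(X, params):
    """Binary classifier induced by the network: sign of the score,
    with labels in {-1, +1}."""
    scores = shallow_net(X, *params)
    return np.where(scores >= 0, 1, -1)

def misclassification_error(y_pred, y_true):
    """Empirical 0-1 risk of the classifier."""
    return float(np.mean(y_pred != y_true))

# Hand-crafted weights realizing f(x) = x on R via relu(x) - relu(-x),
# so the induced classifier is sign(x).
params = (np.array([[1.0], [-1.0]]), np.array([0.0, 0.0]),
          np.array([1.0, -1.0]), 0.0)
```

Generalization analysis then bounds the gap between this empirical 0-1 risk and its population counterpart for networks approximating Korobov-space functions.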