Supplementary Material

Neural Information Processing Systems

The supplementary material is organized as follows. First, we prove Proposition 1 and Theorem 1. In this section, we prove Proposition 1 along with some preliminary lemmas. Definition 4. Consider Algorithm 1 for all i ∈ [m] and k ≥ 0, and let us define the following terms. We will make use of the following notation for the history of the method; the samples are assumed to be independent across clients.



A Randomized Zeroth-Order Hierarchical Framework for Heterogeneous Federated Learning

Qiu, Yuyang, Kim, Kibaek, Yousefian, Farzad

arXiv.org Artificial Intelligence

Heterogeneity in federated learning (FL) is a critical and challenging aspect that significantly impacts model performance and convergence. In this paper, we propose a novel framework by formulating heterogeneous FL as a hierarchical optimization problem. This new framework captures both local and global training processes through a bilevel formulation and is capable of the following: (i) addressing client heterogeneity through a personalized learning framework; (ii) capturing the pre-training process on the server side; (iii) updating the global model through nonstandard aggregation; (iv) allowing for nonidentical local steps; and (v) capturing clients' local constraints. We design and analyze an implicit zeroth-order FL method (ZO-HFL), equipped with nonasymptotic convergence guarantees for both the server-agent and the individual client-agents, and asymptotic guarantees for both the server-agent and client-agents in an almost sure sense. Notably, our method does not rely on standard assumptions in heterogeneous FL, such as the bounded gradient dissimilarity condition. We implement our method on image classification tasks and compare it with other methods under different heterogeneous settings.
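The abstract does not reproduce the details of ZO-HFL, but the core zeroth-order idea it builds on can be illustrated with a generic two-point randomized gradient estimator, which approximates a gradient from function values only. This is a minimal sketch in NumPy; the function name, smoothing parameter, and sample counts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point randomized zeroth-order gradient estimate.

    Samples a direction u uniformly on the unit sphere and returns
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, an unbiased estimate
    of the gradient of a smoothed version of f.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # project onto the unit sphere
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Example: estimate the gradient of f(x) = ||x||^2 at x0 (true gradient 2*x0)
# by averaging many single-sample estimates to reduce variance.
x0 = np.array([1.0, -2.0, 0.5])
f = lambda x: float(x @ x)
est = np.mean(
    [zo_gradient(f, x0, rng=np.random.default_rng(s)) for s in range(2000)],
    axis=0,
)
```

Estimators of this form let each client update its model using only loss evaluations, which is what makes zeroth-order schemes attractive when explicit gradients are unavailable.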


When Pattern-by-Pattern Works: Theoretical and Empirical Insights for Logistic Models with Missing Values

Muller, Christophe, Scornet, Erwan, Josse, Julie

arXiv.org Machine Learning

Predicting a response with partially missing inputs remains a challenging task even in parametric models, since parameter estimation in itself is not sufficient to predict on partially observed inputs. Several works study prediction in linear models. In this paper, we focus on logistic models, which present their own difficulties. From a theoretical perspective, we prove that a Pattern-by-Pattern strategy (PbP), which learns one logistic model per missingness pattern, accurately approximates Bayes probabilities in various missing data scenarios (MCAR, MAR and MNAR). Empirically, we thoroughly compare various methods (constant and iterative imputations, complete case analysis, PbP, and an EM algorithm) across classification, probability estimation, calibration, and parameter inference. Our analysis provides a comprehensive view of logistic regression with missing values. It reveals that mean imputation can be used as a baseline for low sample sizes, and improved performance is obtained via nonlinear multiple iterative imputation techniques with the labels (MICE.RF.Y). For large sample sizes, PbP is the best method for Gaussian mixtures, and we recommend MICE.RF.Y in the presence of nonlinear features.
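The Pattern-by-Pattern strategy described above can be sketched concretely: fit one logistic regression per distinct missingness pattern, each trained only on the coordinates observed under that pattern. The code below is a minimal illustration using scikit-learn; the helper names and the toy data are assumptions for demonstration, not the paper's code, and patterns with no observed features (or a single class) would need extra handling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_pbp(X, y):
    """Fit one logistic model per missingness pattern (PbP sketch).

    For each distinct pattern of NaN entries in X, a separate logistic
    regression is trained on the observed coordinates only.
    """
    patterns = np.isnan(X)
    models = {}
    for pat in np.unique(patterns, axis=0):
        rows = (patterns == pat).all(axis=1)  # rows sharing this pattern
        obs = ~pat                            # observed coordinates
        if obs.any():
            models[tuple(pat)] = LogisticRegression().fit(
                X[rows][:, obs], y[rows]
            )
    return models

def predict_pbp(models, X):
    """Predict P(y=1|x) with the model matching each row's pattern."""
    out = np.empty(len(X))
    for i, x in enumerate(X):
        pat = np.isnan(x)
        out[i] = models[tuple(pat)].predict_proba(
            x[~pat].reshape(1, -1)
        )[0, 1]
    return out

# Toy example with two patterns: fully observed, and second feature missing.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X[:200, 1] = np.nan  # half the rows miss feature 2
models = fit_pbp(X, y)
probs = predict_pbp(models, X)
```

Note the trade-off this sketch makes explicit: each per-pattern model sees only a fraction of the data, which is why PbP shines at large sample sizes in the paper's experiments while imputation-based baselines are preferable when data are scarce.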