Appendix Table of Contents
Neural Information Processing Systems
We prove the result for each of the three possible cases of the loss function.

Lemma A.3. For (x, y) ∈ X × Y, we have […]

Using Lemma A.2, we have […]

The ASO formulation above motivated the authors of [59] […] Note that when Θ is a full-rank matrix, this decomposition is unique. Several personalized FL formulations, e.g., […]

D.1 Client-Server Algorithm

Alg. 2 is a detailed version of Alg. 1 (FedEM), with local SGD used as the local solver. Alg. 3 gives our general algorithm for federated surrogate optimization, from which Alg. 2 is derived.

Algorithm 2: FedEM (Federated Expectation-Maximization). Input: Data S […]

Alg. 5 gives our general fully decentralized algorithm for federated surrogate optimization […] As mentioned in Section 3.3, the convergence of decentralized optimization schemes requires certain […] In our paper, we consider the following general assumption. We provide below the rigorous statement of Theorem 3.3, which was informally presented in the main text; its iterates satisfy the following inequalities after a large enough number of […] In particular, we provide the assumptions under which Alg. 3 and Alg. 5 converge.
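To make the client-server structure concrete, the following is a minimal sketch of one communication round of a FedEM-style algorithm with local SGD as the local solver. It is an illustration only, not the paper's exact Alg. 2: the linear models, squared loss, function names (`e_step`, `local_update`, `fedem_round`), and hyperparameters are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical sketch of one client-server FedEM round.
# M mixture components, each a linear model theta[m] of dimension d.
# All names and modeling choices below are illustrative assumptions.

def e_step(losses, pi):
    # E-step: posterior responsibility of each component for each sample,
    # q ∝ pi * exp(-loss), normalized per sample (log-space for stability).
    logits = np.log(pi + 1e-12) - losses          # (n, M)
    logits -= logits.max(axis=1, keepdims=True)
    q = np.exp(logits)
    return q / q.sum(axis=1, keepdims=True)

def local_update(thetas, X, y, q, lr=0.1, steps=5):
    # M-step via local SGD: a few gradient steps on the
    # responsibility-weighted squared loss of each component.
    thetas = thetas.copy()
    for _ in range(steps):
        for m in range(thetas.shape[0]):
            pred = X @ thetas[m]
            grad = X.T @ (q[:, m] * (pred - y)) / len(y)
            thetas[m] -= lr * grad
    return thetas

def fedem_round(thetas, client_data, pis):
    # One communication round: each client runs an E-step and local SGD,
    # then the server averages models weighted by client dataset size.
    new_thetas, sizes = [], []
    for (X, y), pi in zip(client_data, pis):
        losses = 0.5 * (X @ thetas.T - y[:, None]) ** 2   # (n, M)
        q = e_step(losses, pi)
        pi[:] = q.mean(axis=0)        # update local mixture weights in place
        new_thetas.append(local_update(thetas, X, y, q))
        sizes.append(len(y))
    weights = np.array(sizes) / sum(sizes)
    return np.tensordot(weights, np.stack(new_thetas), axes=1)  # (M, d)
```

Note the division of labor this sketch illustrates: mixture weights stay local to each client (personalization), while the component models are the only quantities aggregated by the server.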