We present a federated, asynchronous, and (ε, δ)-differentially private algorithm for PCA in the memory-limited setting. Our algorithm incrementally computes local model updates using a streaming procedure and adaptively estimates the r leading principal components when only O(dr) memory is available, where d is the dimensionality of the data.
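As a rough illustration of how a streaming procedure can maintain a rank-r subspace in O(dr) memory, consider the following Oja-style sketch. This is a generic stand-in rather than the paper's Algorithm 3 (whose local update follows [17]); the update rule and the learning rate eta are our own assumptions for illustration.

```python
import numpy as np

def streaming_pca_update(U, x, eta=0.01):
    """One Oja-style update of an orthonormal d x r basis U with a new sample x.

    Only the d x r matrix U is stored, so memory stays O(dr).
    NOTE: a generic sketch, not the paper's exact local update.
    """
    U = U + eta * np.outer(x, x @ U)  # gradient step toward the leading subspace
    Q, _ = np.linalg.qr(U)            # re-orthonormalize the basis
    return Q

# Toy usage: stream 1000 synthetic samples through the update.
rng = np.random.default_rng(0)
d, r = 100, 5
U = np.linalg.qr(rng.standard_normal((d, r)))[0]  # random orthonormal init
for _ in range(1000):
    x = rng.standard_normal(d)  # stand-in for one streamed data point
    U = streaming_pca_update(U, x)
```

The QR step after each update keeps the basis orthonormal, so the running estimate never needs more than the d × r matrix itself.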
This document serves as supplementary material to the paper Federated Principal Component Analysis. The appendix is structured as follows: 1. Federated-PCA's local update guarantees, 2. Federated-PCA's differential privacy properties, 3. an in-depth analysis of the algorithm's federation, 4. additional evaluation and discussion. Furthermore, we complement our theoretical analysis with additional empirical evaluation on synthetic and real datasets, including details on memory consumption. We note that the local updating procedure in Algorithm 3 inherits some theoretical guarantees from [17]; we build on these to provide a bound for the adaptive case. The informal objective is to find an r-dimensional subspace U that provides the best approximation with respect to the mass of μ.
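For concreteness, one standard way to formalize this objective (our phrasing; the paper's exact statement may differ) is as the expected captured variance under the data distribution μ, maximized over orthonormal bases:

```latex
\max_{\substack{U \in \mathbb{R}^{d \times r} \\ U^{\top} U = I_r}}
  \; \mathbb{E}_{x \sim \mu}\!\left[ \left\lVert U^{\top} x \right\rVert_2^2 \right]
```

Equivalently, minimizing the expected reconstruction error \(\mathbb{E}_{x \sim \mu}[\lVert x - U U^{\top} x \rVert_2^2]\) recovers the same optimal r-dimensional subspace.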