FedAvg with Fine Tuning: Local Updates Lead to Representation Learning

Neural Information Processing Systems

Federated Learning (FL) [1] provides a communication-efficient and privacy-preserving means to learn from data distributed across clients such as cell phones, autonomous vehicles, and hospitals. FL aims for each client to benefit from collaborating in the learning process without sacrificing data privacy or paying a substantial communication cost. Federated Averaging (FedAvg) [1] is the predominant FL algorithm.
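
FedAvg itself is simple to sketch: in each round, every client runs a few local optimization steps starting from the current global model, and the server averages the returned weights. Below is a minimal NumPy sketch on a toy linear-regression problem; the function names, model, and hyperparameters are illustrative choices of ours, not the paper's setup.

```python
import numpy as np

def local_update(w, X, y, lr=0.01, steps=5):
    """A few local gradient steps on mean squared error; returns new weights."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, lr=0.01, steps=5):
    """One FedAvg round: every client trains from the current global model,
    and the server averages the results, weighted by local dataset size."""
    sizes = [len(y) for _, y in clients]
    updates = [local_update(w_global, X, y, lr, steps) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Toy federation: three clients whose features come from shifted
# distributions but share one underlying linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):          # communication rounds
    w = fedavg_round(w, clients)
print(w)                     # approaches w_true as rounds proceed
```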



Mesh-TensorFlow: Deep Learning for Supercomputers

Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, Blake Hechtman

Neural Information Processing Systems

However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters.
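
To make the batch-splitting vs. model-splitting contrast concrete, here is a small NumPy sketch that simulates the two ways of sharding a single matrix multiplication across devices. It is purely illustrative and does not use the Mesh-TensorFlow API; the "devices" are just list shards.

```python
import numpy as np

def batch_split_matmul(x, w, n_devices):
    """Data parallelism (batch-splitting): shard the batch dimension.
    Every 'device' needs a full copy of w, so w must fit on one device."""
    shards = np.array_split(x, n_devices, axis=0)
    return np.concatenate([s @ w for s in shards], axis=0)

def model_split_matmul(x, w, n_devices):
    """Model parallelism: shard the output dimension of w instead.
    Each 'device' holds only a slice of the weights, so larger layers fit,
    and efficiency no longer depends on a large batch size."""
    shards = np.array_split(w, n_devices, axis=1)
    return np.concatenate([x @ s for s in shards], axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))    # batch of 8 examples, hidden size 16
w = rng.normal(size=(16, 32))   # one layer's weight matrix

assert np.allclose(batch_split_matmul(x, w, 4), x @ w)
assert np.allclose(model_split_matmul(x, w, 4), x @ w)
```

Mesh-TensorFlow generalizes this idea by letting the user name tensor dimensions and declare how each maps onto a multi-dimensional mesh of processors, rather than hand-writing the sharding as above.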


Sample Complexity of Interventional Causal Representation Learning

Neural Information Processing Systems

Consider a data-generation process that transforms low-dimensional latent causally-related variables to high-dimensional observed variables. Causal representation learning (CRL) is the process of using the observed data to recover the latent causal variables and the causal structure among them.
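
A minimal synthetic instance of such a data-generation process, assuming a two-variable linear SCM and a linear mixing map (both our illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_obs = 1000, 10

# Latent SCM with two causally related variables: z1 -> z2.
z1 = rng.normal(size=n)
z2 = 1.5 * z1 + rng.normal(scale=0.5, size=n)
Z = np.stack([z1, z2], axis=1)        # (n, 2) low-dimensional latents

# Unknown mixing map from latents to observations (here: random linear).
G = rng.normal(size=(2, d_obs))
X = Z @ G                             # (n, 10) high-dimensional observations

# The CRL task: given only X (plus, in the interventional setting, data
# gathered under interventions on z1 or z2), recover Z up to the permitted
# ambiguities, together with the causal graph z1 -> z2.
```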





Decentralized Noncooperative Games with Coupled Decision-Dependent Distributions

Neural Information Processing Systems

Machine learning aims to generalize models trained on given datasets to make accurate predictions or decisions on new, unseen data (El Naqa and Murphy, 2015). The effectiveness of those models depends on the alignment between the training datasets and deployment environments (Quinonero-Candela et al., 2008).


Appendices

Neural Information Processing Systems

And, for each of them, the second (final) stripe has 44 options. It could seem that small improvements in efficacy may have only a minor effect on final network accuracy, especially considering the noisiness inherent in large-scale training. Better than reducing the magnitude of lost weights, though, is completely eliminating it: by using the zeros already present in the unstructured sparse weight matrix, it may be possible to find a permutation that does not lose any magnitude after applying the N:M constraint.
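
As a rough illustration of that idea, the sketch below (our own toy code, using 2:4 as the common N:M instance) measures the magnitude discarded when the constraint is enforced and brute-forces column permutations for one that loses none; the exhaustive search stands in for the stripe-based search described above and is only feasible for tiny matrices.

```python
import numpy as np
from itertools import permutations

def magnitude_lost_2to4(w):
    """Magnitude discarded by the 2:4 constraint: in every group of 4
    consecutive entries along a row, keep only the 2 largest magnitudes."""
    groups = np.abs(w).reshape(w.shape[0], -1, 4)   # (rows, groups, 4)
    kept = np.sort(groups, axis=-1)[..., 2:]        # two largest per group
    return groups.sum() - kept.sum()

def best_column_permutation(w):
    """Exhaustive search over column permutations (tiny matrices only)."""
    best = min(permutations(range(w.shape[1])),
               key=lambda p: magnitude_lost_2to4(w[:, list(p)]))
    return list(best), magnitude_lost_2to4(w[:, list(best)])

# An unstructured-sparse matrix: under the identity ordering, pruning each
# dense group of 4 down to 2 entries discards real magnitude.
w = np.array([[4.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 5.0, 6.0, 7.0, 8.0]])
print(magnitude_lost_2to4(w))          # identity ordering loses 14.0
perm, lost = best_column_permutation(w)
print(perm, lost)                      # a good permutation loses 0.0 here
```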