Notes for Federated Learning Tutorial (Part II)

#artificialintelligence

The federated setting is distinct in that it considers local updating with heterogeneous data and partial device participation, often for non-convex objectives. Federated optimization methods that perform local updating can significantly reduce the number of communication rounds needed for convergence.
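
To make the local-updating idea concrete, here is a minimal NumPy sketch of FedAvg-style rounds with heterogeneous client data and partial device participation. The toy linear-regression task, client count, and step sizes are illustrative assumptions, not details from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.05, local_steps=5):
    """Run several local SGD steps on one client's least-squares objective."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Hypothetical setup: 10 clients whose feature distributions are shifted,
# so the local data are heterogeneous (non-i.i.d.).
d = 5
w_true = rng.normal(size=d)
clients = []
for k in range(10):
    X = rng.normal(loc=0.1 * k, size=(20, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=20)))

w_global = np.zeros(d)
for rnd in range(50):                           # communication rounds
    picked = rng.choice(len(clients), size=3, replace=False)  # partial participation
    updates = [local_update(w_global, *clients[k]) for k in picked]
    sizes = [len(clients[k][1]) for k in picked]
    # Server aggregates the local models, weighted by local dataset size.
    w_global = np.average(updates, axis=0, weights=sizes)
```

Because each selected client takes several gradient steps before communicating, the server needs far fewer rounds than it would if clients sent back a single gradient per round.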


Federated Learning

#artificialintelligence

The five interns all look up. Brad, a burly Caucasian jock, waves hello overenthusiastically. Kai, a nonbinary Japanese-American hacker, plays with a Rubik's Cube. Devi, a bubbly Indian-American networker, snaps a selfie. Mateo, a scrawny Hispanic bookworm, pauses in the middle of eating a sandwich. Aliyah, a sharply-dressed African-American security enthusiast, looks unimpressed.


Model Pruning Enables Efficient Federated Learning on Edge Devices

arXiv.org Machine Learning

Federated learning is a recent approach to distributed model training that does not share clients' raw data. It allows models to be trained on the large amounts of user data collected by edge and mobile devices while preserving data privacy. A challenge in federated learning is that the devices usually have much lower computational power and communication bandwidth than machines in data centers, so training large deep neural networks in such a setting can consume substantial time and resources. To overcome this challenge, we propose a method that integrates model pruning with federated learning: initial model pruning at the server, further pruning as part of the federated learning process, and finally the regular federated learning procedure. Compared to standard federated learning approaches, the proposed method reduces computation, communication, and storage costs. Extensive experiments on real edge devices validate its benefits.
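
The abstract does not spell out the pruning mechanics, so the following is a hedged NumPy sketch of one plausible instantiation: server-side magnitude pruning fixes a sparsity mask, a second pruning pass runs partway through training, and federated averaging then continues on the surviving weights. All function names and hyperparameters here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)

def magnitude_mask(w, sparsity):
    """Keep the largest-magnitude weights, zeroing out a `sparsity` fraction."""
    keep = np.argsort(np.abs(w))[-int(len(w) * (1 - sparsity)):]
    mask = np.zeros_like(w, dtype=bool)
    mask[keep] = True
    return mask

def client_update(w, mask, X, y, lr=0.05, steps=5):
    """Local SGD restricted to the unpruned coordinates."""
    w = w * mask
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad * mask              # pruned coordinates stay at zero
    return w

# Toy federated task with a sparse ground-truth model.
d, n = 50, 30
w_true = rng.normal(size=d) * (rng.random(d) < 0.3)
clients = []
for _ in range(8):
    X = rng.normal(size=(n, d))
    clients.append((X, X @ w_true + 0.05 * rng.normal(size=n)))

w = 0.1 * rng.normal(size=d)
mask = magnitude_mask(w, sparsity=0.5)     # initial pruning at the server
for rnd in range(40):
    updates = [client_update(w, mask, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)
    if rnd == 10:                          # further pruning during federated training
        mask = magnitude_mask(w, sparsity=0.8)
        w *= mask
# The remaining rounds are regular federated averaging on the pruned model.
```

Once the mask is fixed, clients only need to train, store, and transmit the surviving coordinates, which is where the computation, communication, and storage savings come from.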


Federated learning with TensorFlow Federated (TF World '19)

#artificialintelligence

TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF was developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning in which a shared global model is trained across many participating clients that keep their training data local. By eliminating the need to collect data at a central location while still letting each participant benefit from the collective knowledge of the network, FL lets you build intelligent applications that leverage insights from data that might be too costly, sensitive, or impractical to collect. In this session, we explain the key concepts behind FL and TFF, show how to set up an FL experiment and run it in a simulator, walk through what the code looks like and how to extend it, and briefly discuss options for future deployment to real devices.
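
As a companion to the session, here is a minimal simulator sketch following the pattern of the TFF image-classification tutorials from around the time of the talk (TFF 0.x). The API has evolved since (newer releases expose tff.learning.algorithms.build_weighted_fed_avg), so treat the exact names as version-dependent rather than definitive.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Simulation dataset shipped with TFF: Federated EMNIST, partitioned by writer.
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
    # Flatten each 28x28 image to a 784-vector and batch.
    return dataset.map(
        lambda e: (tf.reshape(e['pixels'], [784]), e['label'])).batch(20)

# A handful of simulated clients for the experiment.
client_data = [
    preprocess(emnist_train.create_tf_dataset_for_client(cid))
    for cid in emnist_train.client_ids[:5]
]

def model_fn():
    # A deliberately tiny model; TFF wraps it with loss and metrics.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, input_shape=(784,)),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=client_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Federated Averaging, executed entirely in the local simulator.
process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
state = process.initialize()
for round_num in range(10):
    state, metrics = process.next(state, client_data)
    print('round {}: {}'.format(round_num, metrics))
```

The same `model_fn` and iterative process that run in the simulator are the artifacts TFF is designed to eventually deploy to real devices, which is why the session treats simulation as the starting point.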


Towards Causal Federated Learning For Enhanced Robustness and Privacy

arXiv.org Artificial Intelligence

Federated Learning is an emerging privacy-preserving distributed machine learning approach that builds a shared model by training locally on participating devices (clients) and aggregating the local models into a global one. Because this approach avoids centralized data collection and aggregation, it substantially reduces the associated privacy risks. However, the data samples across participating clients are usually not independent and identically distributed (non-i.i.d.), so Out-of-Distribution (OOD) generalization of the learned models can be poor. Beyond this challenge, federated learning also remains vulnerable to various security attacks in which a few malicious participants work to insert backdoors, degrade the aggregated model, or infer the data owned by other participants. In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup and analyse empirically how it enhances the Out-of-Distribution (OOD) accuracy as well as the privacy of the final learned model.
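
The abstract does not give the training objective, but a common way to operationalize invariant-feature learning is an IRMv1-style penalty (Arjovsky et al., 2019) added to each client's local loss before standard federated averaging, treating each client as an environment. The TensorFlow sketch below illustrates that combination; the penalty weight, toy data, and client setup are assumptions for illustration, not the paper's method.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
d = 10

def irm_penalty(logits, labels):
    """IRMv1-style penalty: squared gradient of the client's risk with
    respect to a dummy classifier scale fixed at 1.0."""
    scale = tf.constant(1.0)
    with tf.GradientTape() as tape:
        tape.watch(scale)
        risk = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
            labels=labels, logits=logits * scale))
    return tf.square(tape.gradient(risk, scale))

def client_update(weights, X, y, lr=0.1, lam=1.0, steps=5):
    """Local steps on empirical risk plus the invariance penalty."""
    w = tf.Variable(weights)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            logits = tf.linalg.matvec(X, w)
            risk = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                labels=y, logits=logits))
            loss = risk + lam * irm_penalty(logits, y)
        w.assign_sub(lr * tape.gradient(loss, w))
    return w.numpy()

# Hypothetical non-i.i.d. clients: each draws features from a shifted
# distribution, but all share the same causal labeling rule.
w_causal = rng.normal(size=d).astype(np.float32)
clients = []
for k in range(4):
    X = rng.normal(loc=0.3 * k, size=(64, d)).astype(np.float32)
    y = (X @ w_causal + 0.1 * rng.normal(size=64) > 0).astype(np.float32)
    clients.append((tf.constant(X), tf.constant(y)))

w_global = np.zeros(d, dtype=np.float32)
for rnd in range(20):                      # standard FedAvg aggregation
    updates = [client_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0).astype(np.float32)
```

The intuition matches the abstract: features whose predictive relationship holds across every client survive the penalty, while client-specific (spurious) correlations are suppressed, which is what improves OOD behavior of the aggregated model.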