Top 10 Coding Tools For Federated Learning

#artificialintelligence

Federated Learning was introduced to collaboratively learn a shared prediction model while keeping all the training data on the device. This enabled machine learning developers to build pipelines that don't require storing the data in the cloud. The main drivers behind FL are privacy and confidentiality concerns, regulatory compliance requirements, and the impracticality of moving data to one central learning location. Here are a few libraries (mostly by OpenMined) that can help developers build federated learning systems for edge devices. Developers can write the model and training plan in normal PyTorch, and use PySyft and syft.js to run them across federated workers.
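
To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg) in plain PyTorch. It deliberately avoids any PySyft-specific API, which has changed across releases; local_update and federated_average are illustrative names, not library calls.

```python
import copy
import torch
import torch.nn as nn

# Illustrative FedAvg sketch: each client trains locally on its own
# data, and only model weights -- never raw data -- are aggregated.

def local_update(global_model, data_loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(client_states):
    """Average the clients' weights into a new global state dict."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        for state in client_states[1:]:
            avg[key] += state[key]
        avg[key] = avg[key] / len(client_states)
    return avg
```

A single communication round then samples some clients, runs local_update on each, averages the returned weights with federated_average, and loads the result back into the global model via load_state_dict.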


Remote Data Science Part 2: Introduction to PySyft and PyGrid

#artificialintelligence

This post is a continuation of "Remote Data Science Part 1: Today's privacy challenges in BigData". The previous blog discussed the importance of understanding privacy challenges in BigData and explained how "Remote Data Science" provides three privacy guarantees for the data scientist and the data owner. This blog explains the different components of Remote Data Science and distinguishes between "model-centric FL" and "data-centric FL", both of which are deployable in a Remote Data Science architecture. PyGrid is a peer-to-peer network of data curators/owners and data scientists who can collectively train AI models using PySyft on decentralised data (the data never leaves the device).
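
As a rough illustration of the data-centric idea, consider the toy sketch below. DataOwnerNode and its methods are hypothetical names, not the PyGrid API; the point is that the data scientist works against a node that releases only vetted aggregates, never the raw records.

```python
# Hypothetical illustration of data-centric remote data science.
# None of these names are PySyft/PyGrid API calls.

class DataOwnerNode:
    """Hosts a dataset; returns only results the owner allows out."""

    def __init__(self, data, privacy_budget=10.0):
        self._data = data                  # never leaves this object
        self.privacy_budget = privacy_budget

    def remote_mean(self, cost=1.0):
        # Each query spends privacy budget; exhausted budget = no answer.
        if cost > self.privacy_budget:
            raise PermissionError("privacy budget exhausted")
        self.privacy_budget -= cost
        return sum(self._data) / len(self._data)  # aggregate only

# The data scientist interacts with the node, not the raw records:
node = DataOwnerNode(data=[4.0, 8.0, 15.0, 16.0, 23.0, 42.0])
print(node.remote_mean())  # 18.0 -- a vetted aggregate, not the data
```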


Federated learning with TensorFlow Federated (TF World '19)

#artificialintelligence

TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. By eliminating the need to collect data at a central location, yet still enabling each participant to benefit from the collective knowledge of everything in the network, FL lets you build intelligent applications that leverage insights from data that might be too costly, sensitive, or impractical to collect. In this session, we explain the key concepts behind FL and TFF, how to set up an FL experiment and run it in a simulator, what the code looks like and how to extend it, and we briefly discuss options for future deployment to real devices.
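
As a taste of what the session walks through, a simulated FedAvg experiment in the style of the TFF tutorials from that era looked roughly like this; the TFF API has evolved since, so treat the exact calls as indicative rather than current.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Load TFF's federated EMNIST dataset: one local dataset per client.
emnist_train, _ = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
    # Flatten the 28x28 images and batch each client's local examples.
    def format_example(element):
        return (tf.reshape(element['pixels'], [-1]), element['label'])
    return dataset.map(format_example).batch(20)

# Pick a handful of simulated clients for the experiment.
client_ids = emnist_train.client_ids[:10]
federated_train_data = [
    preprocess(emnist_train.create_tf_dataset_for_client(cid))
    for cid in client_ids]

def model_fn():
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax')])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=federated_train_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy())

# Federated Averaging: each round, clients train locally and the
# server averages their model updates -- raw data never moves.
trainer = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = trainer.initialize()
for round_num in range(5):
    state, metrics = trainer.next(state, federated_train_data)
    print(f'round {round_num}: {metrics}')
```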


What's New in Deep Learning Research: Understanding Federated Learning

#artificialintelligence

Last week I published a brief analysis of the OpenMined platform as one of the new technologies trying to enable truly decentralized artificial intelligence (AI) processes by leveraging blockchain technologies. In the article, I mentioned that OpenMined drew part of its inspiration from Google's research on federated learning as a mechanism to improve on the traditional centralized approach to training AI models. From my perspective, federated learning is one of the most interesting AI research breakthroughs of the last two years, and it is already powering mission-critical applications.


Facebook Open-Sources Machine-Learning Privacy Library Opacus

#artificialintelligence

Facebook AI Research (FAIR) has announced the release of Opacus, a high-speed library for applying differential privacy techniques when training deep-learning models using the PyTorch framework. Opacus can achieve an order-of-magnitude speedup compared to other privacy libraries. The library was described on the FAIR blog. Opacus provides an API and implementation of a PrivacyEngine, which attaches directly to the PyTorch optimizer during training. By using hooks in the PyTorch Autograd component, Opacus can efficiently calculate per-sample gradients, a key operation for differential privacy.
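
At the time of release, attaching the PrivacyEngine looked roughly like the sketch below. Later Opacus versions replaced attach() with make_private(), so treat the exact constructor signature as version-dependent; the data here is synthetic just to keep the example self-contained.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(28 * 28, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# Synthetic stand-in data so the example runs end to end.
data = TensorDataset(torch.randn(1024, 28 * 28),
                     torch.randint(0, 10, (1024,)))
train_loader = DataLoader(data, batch_size=64)

privacy_engine = PrivacyEngine(
    model,
    batch_size=64,
    sample_size=1024,             # total number of training examples
    alphas=[1 + x / 10.0 for x in range(1, 100)],
    noise_multiplier=1.3,         # scale of noise added to gradients
    max_grad_norm=1.0,            # per-sample gradient clipping bound
)
privacy_engine.attach(optimizer)  # hooks DP logic into optimizer.step()

loss_fn = nn.CrossEntropyLoss()
for x, y in train_loader:
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()  # Autograd hooks capture per-sample grads
    optimizer.step()                 # gradients are clipped and noised here

# Report the privacy spent so far at a chosen delta.
epsilon, best_alpha = privacy_engine.get_privacy_spent(1e-5)
print(f'epsilon = {epsilon:.2f} at delta = 1e-5')
```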