What is Federated Learning(FL)? Techniques & Benefits in 2021

#artificialintelligence

Federated learning is a machine learning method that enables machine learning models to obtain experience from different data sets located at different sites. This allows personal data to remain at local sites, reducing the possibility of personal data breaches. Federated learning trains machine learning algorithms on multiple local datasets without exchanging data, which allows companies to create a shared global model without placing the training data in a central location. In machine learning there are two steps, training and inference.
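The training loop described above can be sketched as federated averaging: each site fits the current global model on its own local data, and a central server averages the returned weights, weighted by local dataset size, so that only model parameters ever leave a site. This is a minimal illustrative sketch in NumPy; the linear-regression task, the three sites, and the learning rates are assumptions made for the example, not details from the article:

```python
# Minimal federated-averaging (FedAvg) sketch.
# Assumption: each "site" holds a private dataset for a simple
# linear-regression task; only weights leave a site, never raw data.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three sites, each with a local dataset that stays local.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient-descent steps computed entirely on local data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Server loop: broadcast global weights, collect locally updated
# weights, and average them weighted by local dataset size.
w_global = np.zeros(2)
for _round in range(20):
    updates = [local_update(w_global, X, y) for X, y in sites]
    sizes = [len(y) for _, y in sites]
    w_global = np.average(updates, axis=0, weights=sizes)
```

The weighted average is what makes the aggregate behave like training on the pooled data, even though no site ever shares its examples.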


Federated Learning- The Perfect Amalgamation of IoT and AI!

#artificialintelligence

One regular Tuesday morning I was looking for new papers on edge Internet of Things networks, like we all do, and came across a new term called Federated Learning, and immediately went on a quest to understand this technique. Federated Learning (FL) in its simplest form is a union between the vivacious worlds of Machine Learning and the Internet of Things. It is an ML-based solution that improves the functionality of edge devices in IoT networks. Edge devices in an Internet of Things network are the mobile devices that collect data and help achieve the purpose of the network remotely, as opposed to the traditionally connected nodes in a network. Our smartphones are the most common example of edge devices. According to a 2017 Google AI Blog post: "Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud."


Difference between distributed learning versus federated learning algorithms - KDnuggets

#artificialintelligence

A distributed machine learning algorithm is a multi-node system that builds training models through independent training on different nodes. A distributed training system accelerates training on huge amounts of data. When working with big data, training time grows steeply, which makes scalability and online re-training difficult. For example, suppose we want to build a recommendation model and, based on each day's user interactions, wish to re-train it. User interactions can run as high as hundreds of clicks per user across millions of users.
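For contrast with the federated setting, the distributed setup described here can be sketched as synchronous data-parallel training: one logical dataset is sharded across workers by a coordinator (which, unlike in federated learning, is free to move and repartition the data), each worker computes a gradient on its shard, and the averaged gradient drives each update step. A minimal NumPy sketch; the regression task, the four workers, and the hyperparameters are assumptions for illustration:

```python
# Sketch of synchronous distributed (data-parallel) training.
# Assumption: one central dataset is sharded across 4 workers;
# the coordinator averages per-shard gradients each step.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.5, 1.5, -2.0])

# One big dataset, partitioned across workers by the coordinator.
X = rng.normal(size=(400, 3))
y = X @ true_w + rng.normal(scale=0.05, size=400)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = np.zeros(3)
lr = 0.1
for _step in range(200):
    # Each worker computes a gradient on its own shard in parallel...
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]
    # ...then the coordinator averages them (an all-reduce) and steps.
    w -= lr * np.mean(grads, axis=0)
```

Averaging gradients every step is mathematically equivalent to full-batch gradient descent on the whole dataset, which is why this approach scales training without changing the model being learned.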


Cross-Domain Federated Learning in Medical Imaging

arXiv.org Artificial Intelligence

Federated learning is increasingly being explored in the field of medical imaging to train deep learning models on large-scale datasets distributed across different data centers while preserving privacy by avoiding the need to transfer sensitive patient information. In this manuscript, we explore federated learning in a multi-domain, multi-task setting wherein different participating nodes may contain datasets sourced from different domains and are trained to solve different tasks. We evaluated cross-domain federated learning for the tasks of object detection and segmentation across two different experimental settings: multi-modal and multi-organ. The results from our experiments on the cross-domain federated learning framework were very encouraging, with an overlap similarity of 0.79 for organ localization and 0.65 for lesion segmentation. Our results demonstrate the potential of federated learning in developing multi-domain, multi-task deep learning models without sharing data from different domains.


Federated Learning Meets Natural Language Processing: A Survey

arXiv.org Artificial Intelligence

Federated Learning aims to learn machine learning models from multiple decentralized edge devices (e.g. mobile phones) or servers without sacrificing local data privacy. Recent Natural Language Processing techniques rely on deep learning and large pre-trained language models. However, both large deep neural networks and language models are trained on huge amounts of data, which often resides on the server side. Since text data largely originates from end users, in this work we look into recent NLP models and techniques that use federated learning as the learning framework. Our survey discusses major challenges in federated natural language processing, including algorithmic challenges, system challenges, and privacy issues. We also provide a critical review of existing Federated NLP evaluation methods and tools. Finally, we highlight current research gaps and future directions.