Federated Learning Could Be the Next Big Thing for Data Privacy

#artificialintelligence

What it sounds like: a government-mandated curriculum. What it is: an emerging, privacy-focused form of machine learning. Google introduced the technique in 2017, testing it on Android keyboard suggestions. Using it, smartphones could locally store and process information about which suggestions a user chose, along with the surrounding context. That helped improve the keyboard algorithm without sacrificing nearly as much user privacy.


FedVision: An Online Visual Object Detection Platform Powered by Federated Learning

arXiv.org Machine Learning

Visual object detection is a computer vision-based artificial intelligence (AI) technique with many practical applications (e.g., fire hazard monitoring). However, due to privacy concerns and the high cost of transmitting video data, it is highly challenging to follow the current approach of building object detection models on large, centrally stored training datasets. Federated learning (FL) is a promising approach to resolving this challenge. Nevertheless, there is currently no easy-to-use tool that enables computer vision application developers who are not experts in federated learning to conveniently leverage this technology in their systems. In this paper, we report FedVision - a machine learning engineering platform to support the development of federated learning powered computer vision applications. The platform has been deployed through a collaboration between WeBank and Extreme Vision to help customers develop computer vision-based safety monitoring solutions for smart city applications. Over four months of usage, it has achieved significant efficiency improvements and cost reductions for three major corporate customers, while removing the need to transmit sensitive data. To the best of our knowledge, this is the first real-world application of FL to computer vision-based tasks.


Pretraining Federated Text Models for Next Word Prediction

arXiv.org Machine Learning

Federated learning is a decentralized approach to training models on distributed devices: local changes are summarized, and aggregate parameters from local models, rather than the data itself, are sent to the cloud. In this research we apply the idea of transfer learning to federated training for next word prediction (NWP) and conduct a number of experiments demonstrating enhancements over current baselines on which federated NWP models have been successful. Specifically, we compare federated training from randomly initialized models against various combinations of pretraining approaches, including pretrained word embeddings and whole-model pretraining followed by federated fine-tuning, for NWP on a dataset of Stack Overflow posts. We realize a lift in performance using pretrained embeddings without increasing the number of required training rounds or the memory footprint. We also observe notable differences using centrally pretrained networks, depending especially on the datasets used. Our research offers effective yet inexpensive improvements to federated NWP and paves the way for more rigorous experimentation with transfer learning techniques for federated learning.
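
The pretrain-then-federate recipe is easy to see in miniature. Below is a minimal sketch under heavy simplifying assumptions: a toy one-word-context predictor (embedding lookup plus softmax) stands in for the paper's NWP network, clients hold synthetic non-IID word pairs rather than Stack Overflow posts, and `pretrained` is a random stand-in for embeddings actually learned on a central corpus. All names are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 30, 8

def init_params(pretrained_emb=None):
    # Random init everywhere, except the embedding table can be seeded
    # from a pretrained matrix (the intervention the paper studies).
    emb = pretrained_emb.copy() if pretrained_emb is not None \
          else rng.normal(0, 0.1, (VOCAB, DIM))
    out = rng.normal(0, 0.1, (DIM, VOCAB))
    return {"emb": emb, "out": out}

def local_sgd(params, data, lr=0.5, epochs=1):
    # One client's local training on private (context_word, next_word) pairs.
    p = {k: v.copy() for k, v in params.items()}
    for _ in range(epochs):
        for ctx, nxt in data:
            h = p["emb"][ctx]                 # embed the context word
            logits = h @ p["out"]
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            probs[nxt] -= 1.0                 # d(cross-entropy)/d(logits)
            grad_h = p["out"] @ probs         # gradient w.r.t. the embedding
            p["out"] -= lr * np.outer(h, probs)
            p["emb"][ctx] -= lr * grad_h
    return p

def fedavg_round(global_params, clients):
    # Clients fine-tune locally; the server averages the resulting weights.
    locals_ = [local_sgd(global_params, c) for c in clients]
    return {k: np.mean([lp[k] for lp in locals_], axis=0)
            for k in global_params}

# Synthetic non-IID clients: each favors a different slice of the vocabulary.
clients = [[(int(rng.integers(i * 10, i * 10 + 10)),
             int(rng.integers(i * 10, i * 10 + 10)))
            for _ in range(30)] for i in range(3)]

pretrained = rng.normal(0, 0.1, (VOCAB, DIM))  # stand-in for real pretraining
params = init_params(pretrained_emb=pretrained)
for _ in range(5):
    params = fedavg_round(params, clients)
```

Comparing this run against one started from `init_params()` with no pretrained matrix mirrors, in toy form, the paper's comparison of pretrained initialization against random starts.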


Federated Learning: A Decentralized Form of Machine Learning

#artificialintelligence

Most major consumer tech companies focused on AI and machine learning now use federated learning – a form of machine learning that trains algorithms on devices distributed across a network, without the need for data to leave each device. Given the increasing awareness of privacy issues, federated learning could become the preferred method of machine learning for use cases involving sensitive data (such as location, financial, or health data). Machine learning algorithms, and the data sets they are trained on, are usually centralized: data is brought from edge devices (mobile phones, tablets, laptops, and industrial IoT devices) to a central server, where machine learning algorithms crunch it to gain insight. However, researchers have found that a central server doesn't need to be in the loop in this way.
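
The mechanism the paragraph describes can be shown in a few lines. The following is a hedged sketch, not any particular library's API: each simulated device runs a few steps of least-squares SGD on data that never leaves it, and the server only ever sees and averages weights (the federated averaging pattern). Names like `local_step` are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_step(w, X, y, lr=0.1, epochs=5):
    # Train on one device's private data; return updated weights only.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Each device holds its own (X, y); the server never sees the raw data.
w_true = np.array([2.0, -1.0, 0.5])
devices = []
for _ in range(4):
    X = rng.normal(size=(40, 3))
    devices.append((X, X @ w_true + 0.1 * rng.normal(size=40)))

w_global = np.zeros(3)
for _ in range(20):
    # Devices train locally; the server averages the returned weights,
    # weighting by how many examples each device contributed.
    local_ws = [local_step(w_global, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)   # approaches w_true without pooling any raw data
```

The design point is simply that only model parameters cross the network; the central server coordinates and aggregates but never touches the underlying records.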


Multi-Center Federated Learning

arXiv.org Machine Learning

Federated learning has received great attention for its capability to train large-scale models in a decentralized manner without needing direct access to user data, helping protect users' private data from centralized collection. Unlike distributed machine learning, federated learning aims to tackle non-IID data from heterogeneous sources in various real-world applications, such as those on smartphones. Existing federated learning approaches usually adopt a single global model to capture the shared knowledge of all users by aggregating their gradients, regardless of the discrepancies between their data distributions. However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users. Our paper proposes a novel multi-center aggregation mechanism for federated learning that learns multiple global models from non-IID user data and simultaneously derives the optimal matching between users and centers. We formulate the problem as a joint optimization that can be efficiently solved by a stochastic expectation-maximization (EM) algorithm. Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
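
A rough sketch of the alternating structure the abstract describes, under simplifying assumptions: clients hold linear-regression data drawn from two latent populations, the E-step assigns each client to the center whose model best fits its data, and the M-step re-averages each center from its assigned clients' locally trained weights. The paper's stochastic EM formulation is richer than this toy version; the code only illustrates the assign/aggregate loop.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_train(w, X, y, lr=0.1, epochs=5):
    # Local least-squares SGD on one client's private data.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Two latent user populations with different ground-truth models (non-IID).
truths = [np.array([2.0, -1.0]), np.array([-1.5, 1.0])]
clients = []
for i in range(8):
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ truths[i % 2] + 0.1 * rng.normal(size=30)))

K = 2
centers = [rng.normal(size=2) for _ in range(K)]
for _ in range(15):
    local_ws, assign = [], []
    for X, y in clients:
        # E-step: pick the center whose model best fits this client's data.
        losses = [np.mean((X @ c - y) ** 2) for c in centers]
        k = int(np.argmin(losses))
        assign.append(k)
        # Each client fine-tunes its best-matching center's model locally.
        local_ws.append(local_train(centers[k], X, y))
    # M-step: each center becomes the average of its assigned local models.
    for k in range(K):
        member_ws = [w for w, a in zip(local_ws, assign) if a == k]
        if member_ws:
            centers[k] = np.mean(member_ws, axis=0)

print(assign)   # clients should separate into the two populations
```

With a single center (K = 1) this collapses to ordinary federated averaging, which is exactly the single-global-model baseline the abstract argues cannot capture heterogeneous user distributions.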