Flanagan, Adrian
A Payload Optimization Method for Federated Recommender Systems
Khan, Farwa K., Flanagan, Adrian, Tan, Kuan E., Alamgir, Zareen, Ammad-Ud-Din, Muhammad
Federated Learning (FL) McMahan et al. [2017], a privacy-by-design machine learning approach, has introduced new ways to build recommender systems (RS). Unlike traditional approaches, FL removes the need to collect and store users' private data on central servers while still making it possible to train robust recommendation models. In practice, FL distributes the model training process to the users' devices (i.e., the client or edge devices), allowing a global model to be trained from user-specific local models. Each user updates the global model locally using their personal data and sends the local model updates to a server, which aggregates them according to a pre-defined scheme in order to update the global model. A prominent direction of research in this domain is Federated Collaborative Filtering (FCF) Ammad-Ud-Din et al. [2019], Chai et al. [2020], Dolui et al. [2019], which extends the standard Collaborative Filtering (CF) model Hu et al. [2008] to the federated mode. CF is one of the most frequently used matrix factorization models for generating personalized recommendations, either on its own or in combination with other types of models Koren et al. [2009].
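As a rough illustration of the training loop described above (not the specific FCF algorithm), the sketch below shows one federated round for a simple linear model: the server distributes the global parameters, each client computes an update on its own private data, and the server combines the updates with a dataset-size-weighted average (FedAvg-style). All function and variable names are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, X, y):
    # Squared-error gradient for a simple linear model on the client's private data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def client_update(w_global, X, y, lr=0.01, epochs=5):
    # The client refines the global model locally; only the model delta leaves the device.
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * local_gradient(w, X, y)
    return w - w_global

def federated_round(w_global, clients):
    # The server aggregates client deltas, weighted by local dataset size.
    deltas = [client_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return w_global + np.average(np.stack(deltas), axis=0, weights=sizes)

# Toy usage: three clients, each holding private data that never leaves the device.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
```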
Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions
Blanco-Justicia, Alberto, Domingo-Ferrer, Josep, Martínez, Sergio, Sánchez, David, Flanagan, Adrian, Tan, Kuan Eeik
Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL reduces the computation performed at the server and does not require the clients to outsource their private data to it. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch might leak information on the clients' private data. On the other hand, the model learnt by the server may be subjected to attacks by malicious clients; these security attacks might poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
Federated Multi-view Matrix Factorization for Personalized Recommendations
Flanagan, Adrian, Oyomno, Were, Grigorievskiy, Alexander, Tan, Kuan Eeik, Khan, Suleiman A., Ammad-Ud-Din, Muhammad
We introduce the federated multi-view matrix factorization method that extends the federated learning framework to matrix factorization with multiple data sources. Our method is able to learn the multi-view model without transferring the user's personal data to a central server. As far as we are aware, this is the first federated model to provide recommendations using multi-view matrix factorization. The model is rigorously evaluated on three datasets in production settings. Empirical validation confirms that federated multi-view matrix factorization outperforms simpler methods that do not take the multi-view structure of the data into account; it also demonstrates the usefulness of the proposed method for the challenging task of cold-start federated recommendations.
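The toy sketch below illustrates, in a simplified centralized form, what coupling multiple views in a single factorization means: one shared user-factor matrix U explains all views, while each view keeps its own item-factor matrix. This is a plain alternating-least-squares assumption with hypothetical names, not the paper's federated algorithm, in which the per-user computations would stay on the client.

```python
import numpy as np

def multiview_mf(views, k=2, reg=0.1, iters=30):
    # Alternating least squares: a single shared user-factor matrix U couples
    # all views; each view has its own item-factor matrix V.
    n_users = views[0].shape[0]
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    I = reg * np.eye(k)
    for _ in range(iters):
        # Update each view's item factors given the shared U.
        Vs = [np.linalg.solve(U.T @ U + I, U.T @ X).T for X in views]
        # Update the shared U against all views stacked side by side.
        X_all, V_all = np.hstack(views), np.vstack(Vs)
        U = np.linalg.solve(V_all.T @ V_all + I, V_all.T @ X_all.T).T
    return U, Vs

# Toy usage: two views of the same users (e.g., interactions and side information).
rng = np.random.default_rng(2)
views = [rng.random((10, 6)), rng.random((10, 4))]
U, Vs = multiview_mf(views)
```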
Federated Collaborative Filtering for Privacy-Preserving Personalized Recommendation System
Ammad-ud-din, Muhammad, Ivannikova, Elena, Khan, Suleiman A., Oyomno, Were, Fu, Qiang, Tan, Kuan Eeik, Flanagan, Adrian
The increasing interest in user privacy is leading to new privacy-preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, and the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, which is then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a collaborative filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms that a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance.
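As a hedged illustration of this kind of update scheme (a simplified least-squares variant, not the paper's implicit-feedback formulation), the sketch below keeps each user's factor vector on the client, sends only gradients of the shared item factors to the server, and lets the server take a stochastic gradient step. Names and hyperparameters are hypothetical.

```python
import numpy as np

def local_step(Q, r_u, reg=0.1):
    # Client: solve the private user factor in closed form (it never leaves the device),
    # then return the gradient of the local loss w.r.t. the shared item factors Q.
    k = Q.shape[1]
    p_u = np.linalg.solve(Q.T @ Q + reg * np.eye(k), Q.T @ r_u)
    err = r_u - Q @ p_u
    return -2.0 * np.outer(err, p_u) + 2.0 * reg * Q

def federated_cf_round(Q, ratings, lr=0.05):
    # Server: average the clients' gradients and take one gradient step on Q.
    grads = [local_step(Q, r_u) for r_u in ratings]
    return Q - lr * np.mean(np.stack(grads), axis=0)

# Toy usage: 5 users, 8 items, rank-2 item-factor matrix maintained by the server.
rng = np.random.default_rng(1)
ratings = [rng.random(8) for _ in range(5)]   # one private rating vector per user
Q = rng.normal(scale=0.1, size=(8, 2))
for _ in range(20):
    Q = federated_cf_round(Q, ratings)
```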