HyFL: A Hybrid Framework For Private Federated Learning
Marx, Felix, Schneider, Thomas, Suresh, Ajith, Wehrle, Tobias, Weinert, Christian, Yalame, Hossein
Federated learning (FL) has emerged as an efficient approach for large-scale distributed machine learning that ensures data privacy by keeping training data on client devices. However, recent research has highlighted vulnerabilities in FL, including the potential disclosure of sensitive information through individual model updates and even the aggregated global model. While much attention has been given to clients' data privacy, limited research has addressed the issue of global model privacy. Furthermore, local training on the client side has opened avenues for malicious clients to launch powerful model poisoning attacks. Unfortunately, no existing work provides a comprehensive solution that tackles all of these issues. Therefore, we introduce HyFL, a hybrid framework that enables data and global model privacy while facilitating large-scale deployments. The foundation of HyFL is a unique combination of secure multi-party computation (MPC) techniques with hierarchical federated learning. One notable feature of HyFL is its ability to prevent malicious clients from executing model poisoning attacks, confining them to less destructive data poisoning. We evaluate HyFL's effectiveness using an open-source PyTorch-based FL implementation integrated with Meta's CrypTen framework for privacy-preserving machine learning (PPML). Our performance evaluation demonstrates that HyFL is a promising solution for trustworthy large-scale FL deployment.
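To illustrate the kind of building block HyFL combines, the following minimal Python/NumPy sketch shows additive secret sharing used for secure aggregation in a two-level (hierarchical) round: each aggregation server only ever sees one random-looking share per client update, yet the servers can jointly reconstruct the cluster sum. All identifiers and parameters here are our own illustrative assumptions; the actual HyFL protocol is built on Meta's CrypTen and differs in detail.

    import numpy as np

    # Toy sketch of MPC-style secure aggregation in a hierarchical FL round.
    # Hypothetical names/parameters for illustration only, not the HyFL protocol.

    MOD = 2 ** 32       # additive secret sharing over the ring Z_{2^32}
    FRAC = 16           # fixed-point fractional bits for encoding float updates

    def encode(x):
        return np.round(x * 2 ** FRAC).astype(np.int64) % MOD

    def decode(x):
        x = np.where(x >= MOD // 2, x - MOD, x)   # map back to signed values
        return x / 2 ** FRAC

    def share(x, n, rng):
        """Split x into n additive shares; any n-1 shares look uniformly random."""
        parts = [rng.integers(0, MOD, size=x.shape, dtype=np.int64) for _ in range(n - 1)]
        parts.append((x - sum(parts)) % MOD)
        return parts

    rng = np.random.default_rng(0)
    dim, n_servers = 4, 3
    # two clusters of three clients each, every client holding a local update
    clusters = [[rng.normal(size=dim) for _ in range(3)] for _ in range(2)]

    cluster_aggregates = []
    for clients in clusters:
        # each aggregation server locally sums the shares it receives
        server_sums = [np.zeros(dim, dtype=np.int64) for _ in range(n_servers)]
        for update in clients:
            for i, s in enumerate(share(encode(update), n_servers, rng)):
                server_sums[i] = (server_sums[i] + s) % MOD
        # reconstructing reveals only the cluster aggregate, not any single update
        total = sum(server_sums) % MOD
        cluster_aggregates.append(decode(total) / len(clients))

    # the top level then combines the per-cluster aggregates into a global step
    global_model_update = np.mean(cluster_aggregates, axis=0)
    print(global_model_update)

Note that since clients can only influence training through their (hidden but aggregated) data contributions, misbehavior is limited to data poisoning, which matches the containment property the abstract describes.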
ScionFL: Efficient and Robust Secure Quantized Aggregation
Ben-Itzhak, Yaniv, Möllering, Helen, Pinkas, Benny, Schneider, Thomas, Suresh, Ajith, Tkachenko, Oleksandr, Vargaftik, Shay, Weinert, Christian, Yalame, Hossein, Yanai, Avishay
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal research directions that aim to (i) significantly reduce client-server communication and (ii) mitigate the impact of malicious clients. However, both of these additional properties are essential to facilitate cross-device FL with thousands or even millions of (mobile) participants. In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients. Our framework leverages (novel) multi-party computation (MPC) techniques and supports multiple linear (1-bit) quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation. Our theoretical results are supported by extensive evaluations. We show that with no overhead for clients and moderate overhead on the server side compared to transferring and processing quantized updates in plaintext, we obtain comparable accuracy for standard FL benchmarks. Additionally, we demonstrate the robustness of our framework against state-of-the-art poisoning attacks.
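As a toy illustration of the quantization pipeline this abstract refers to, the following Python/NumPy sketch applies a randomized Hadamard transform followed by unbiased stochastic 1-bit quantization to a single client update. In ScionFL itself, dequantization and aggregation happen under MPC rather than in the clear, and all identifiers below are our own assumptions.

    import numpy as np

    def fwht(x):
        """Fast Walsh-Hadamard transform, O(n log n); n must be a power of two."""
        x = x.astype(np.float64).copy()
        h, n = 1, len(x)
        while h < n:
            for i in range(0, n, 2 * h):
                a = x[i:i + h].copy()
                b = x[i + h:i + 2 * h].copy()
                x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
            h *= 2
        return x / np.sqrt(n)   # orthonormal scaling, so H is its own inverse

    def quantize_1bit(x, rng):
        """Unbiased stochastic 1-bit quantization between min and max of x."""
        lo, hi = x.min(), x.max()
        p = (x - lo) / max(hi - lo, 1e-12)    # probability of rounding up
        bits = rng.random(x.shape) < p        # 1 bit per coordinate + two scalars
        return bits, lo, hi

    def dequantize(bits, lo, hi):
        return np.where(bits, hi, lo)

    rng = np.random.default_rng(0)
    n = 8
    g = rng.normal(size=n)                    # a client's gradient/update
    signs = rng.choice([-1.0, 1.0], size=n)   # random diagonal D of the rotation

    rotated = fwht(signs * g)                 # HD*g spreads energy across coordinates
    bits, lo, hi = quantize_1bit(rotated, rng)

    # server side: dequantize, then invert the rotation (D^-1 = D, H^-1 = H)
    recovered = signs * fwht(dequantize(bits, lo, hi))
    print(np.linalg.norm(recovered - g) / np.linalg.norm(g))

The random rotation flattens the coordinate distribution so that the 1-bit estimator's variance stays low; averaging such unbiased estimates over many clients then concentrates around the true mean update.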
Trustworthy AI Inference Systems: An Industry Research View
Cammarota, Rosario, Schunter, Matthias, Rajan, Anand, Boemer, Fabian, Kiss, Ágnes, Treiber, Amos, Weinert, Christian, Schneider, Thomas, Stapf, Emmanuel, Sadeghi, Ahmad-Reza, Demmler, Daniel, Chen, Huili, Hussain, Siam Umar, Riazi, Sadegh, Koushanfar, Farinaz, Gupta, Saransh, Rosing, Tajana Simunic, Chaudhuri, Kamalika, Nejatollahi, Hamid, Dutt, Nikil, Imani, Mohsen, Laine, Kim, Dubey, Anuj, Aysu, Aydin, Hosseini, Fateme Sadat, Yang, Chengmo, Wallace, Eric, Norton, Pamela
In this work, we provide an industry research view on the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems. Such systems provide customers with timely, informed, and customized inferences to aid their decisions, while at the same time utilizing appropriate security protection mechanisms for AI models. Additionally, such systems should use Privacy-Enhancing Technologies (PETs) to protect customers' data at all times. To approach the subject, we start by introducing trends in AI inference systems. We continue by elaborating on the relationship between Intellectual Property (IP) and private data protection in such systems. Regarding protection mechanisms, we survey the security and privacy building blocks instrumental in designing, building, deploying, and operating private AI inference systems. For example, we highlight opportunities and challenges in AI systems that use trusted execution environments combined with more recent advances in cryptographic techniques to protect data in use. Finally, we outline areas of further development that require the global collective attention of industry, academia, and government researchers to sustain the operation of trustworthy AI inference systems.