FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Salman Avestimehr, Chaoyang He
arXiv.org Artificial Intelligence
This paper introduces FedMLSecurity, a benchmark for simulating adversarial attacks and corresponding defense mechanisms in Federated Learning (FL). As an integral module of the open-source library FedML, which facilitates FL algorithm development and performance comparison, FedMLSecurity extends FedML's capabilities to evaluate security issues and potential remedies in FL. FedMLSecurity comprises two major components: FedMLAttacker, which simulates attacks injected during FL training, and FedMLDefender, which simulates defensive mechanisms that mitigate the impact of those attacks. FedMLSecurity is open-sourced and can be customized for a wide range of machine learning models (e.g., Logistic Regression, ResNet, GAN) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA). It can also be readily applied to Large Language Models (LLMs), demonstrating its adaptability across diverse scenarios.
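The attacker/defender pattern described in the abstract can be illustrated with a minimal, library-agnostic sketch (plain NumPy, not the actual FedML API; all function names here are illustrative assumptions): a malicious client scales its update to dominate plain FedAvg aggregation, while a simple norm-clipping defense bounds every client update before averaging.

```python
import numpy as np

def fedavg(updates):
    # Plain federated averaging: mean of client model updates.
    return np.mean(updates, axis=0)

def scale_attack(update, factor=50.0):
    # Model-poisoning attack: the malicious client inflates its
    # update so it dominates the aggregated result.
    return update * factor

def norm_clip_defense(updates, max_norm=0.5):
    # Defense: clip each client update to a bounded L2 norm
    # before aggregation, limiting any single client's influence.
    clipped = []
    for u in updates:
        n = np.linalg.norm(u)
        clipped.append(u * min(1.0, max_norm / n) if n > 0 else u)
    return np.stack(clipped)

rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]
malicious = scale_attack(benign[0].copy())
updates = np.stack(benign + [malicious])

naive = fedavg(updates)                            # poisoned aggregate
defended = fedavg(norm_clip_defense(updates))      # clipped aggregate
print(np.linalg.norm(naive), np.linalg.norm(defended))
```

With clipping, the aggregate's norm is bounded by the clip threshold regardless of how aggressively the attacker scales, which is the kind of attack-versus-defense comparison the benchmark is designed to automate.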
Oct-6-2023