

FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models, Supplementary Materials, 1 Dataset, 1.1 Links and Preservation

Neural Information Processing Systems

The croissant metadata record is available at croissant. We store our code on GitHub and our dataset on Google Drive; both are widely recognized, reliable storage platforms that ensure long-term preservation. We recommend downloading the raw data directly and following the provided instructions to simplify data processing. The dataset is structured as follows: the local directory contains client-specific data for local training, while the all-clients directory aggregates the data of every client for federated learning.
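The two-directory layout described above (per-client files for local training, one aggregated collection for federated learning) suggests a simple loading pattern. The sketch below illustrates it; the directory names, file naming scheme, and JSON format are assumptions for illustration, not confirmed details of the released dataset:

```python
import json
from pathlib import Path
from typing import Optional


def load_client_data(root: str, client_id: Optional[int] = None):
    """Load data laid out in the style described above: per-client
    files for local training, or one aggregated file for federated
    learning. All path and file names here are hypothetical."""
    base = Path(root)
    if client_id is not None:
        # client-specific data under the "local" directory
        path = base / "local" / f"client_{client_id}.json"
    else:
        # data aggregated over all clients
        path = base / "all_clients" / "data.json"
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Passing a `client_id` selects one client's shard; omitting it loads the pooled data, mirroring the local-versus-aggregated split in the directory structure.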






FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models. Rui Ye, Rui Ge

Neural Information Processing Systems

Based on FedLLM-Bench, we conduct experiments on all datasets to benchmark existing FL methods and provide empirical insights (e.g., on multilingual collaboration). We believe FedLLM-Bench can benefit the FedLLM community by reducing required effort, providing a practical testbed, and promoting fair comparisons.



FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models

Ye, Rui, Ge, Rui, Zhu, Xinyu, Chai, Jingyi, Du, Yaxin, Liu, Yang, Wang, Yanfeng, Chen, Siheng

arXiv.org Artificial Intelligence

Federated learning enables multiple parties to collaboratively train large language models without directly sharing their data (FedLLM). Following this training paradigm, the community has invested substantial effort across diverse aspects, including frameworks, performance, and privacy. However, there are currently no realistic datasets or benchmarks for FedLLM: previous works all rely on artificially constructed datasets, failing to capture properties of real-world scenarios. To address this, we propose FedLLM-Bench, which involves 8 training methods, 4 training datasets, and 6 evaluation metrics, offering a comprehensive testbed for the FedLLM community. FedLLM-Bench encompasses three datasets (e.g., a user-annotated multilingual dataset) for federated instruction tuning and one dataset (a user-annotated preference dataset) for federated preference alignment, with client counts ranging from 38 to 747. Our datasets capture several representative dimensions of real-world diversity: language, quality, quantity, instruction, length, embedding, and preference. Based on FedLLM-Bench, we conduct experiments on all datasets to benchmark existing FL methods and provide empirical insights (e.g., on multilingual collaboration). We believe FedLLM-Bench can benefit the FedLLM community by reducing required effort, providing a practical testbed, and promoting fair comparisons.
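The FL methods benchmarked above all build on aggregating model updates across clients. As a rough illustration of that core step (not the paper's code), a FedAvg-style weighted parameter average, with each client weighted by its local dataset size, can be sketched as:

```python
from typing import Dict, List


def fedavg(client_weights: List[Dict[str, List[float]]],
           client_sizes: List[int]) -> Dict[str, List[float]]:
    """Weighted average of client model parameters (FedAvg-style).

    Each client's parameters are weighted by its local dataset
    size, so clients with more data contribute proportionally more.
    Parameters are represented as flat lists of floats for clarity;
    real implementations operate on tensors.
    """
    total = sum(client_sizes)
    averaged: Dict[str, List[float]] = {}
    for name in client_weights[0]:
        length = len(client_weights[0][name])
        averaged[name] = [
            sum(cw[name][i] * size
                for cw, size in zip(client_weights, client_sizes)) / total
            for i in range(length)
        ]
    return averaged
```

With two clients holding parameter `w` values 0.0 and 4.0 and dataset sizes 1 and 3, the aggregate is (0.0·1 + 4.0·3) / 4 = 3.0, i.e., the larger client dominates the update.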