Federated Instruction Tuning of LLMs with Domain Coverage Augmentation

Zezhou Wang, Yaxin Du, Zhuzhong Qian, Siheng Chen

arXiv.org Artificial Intelligence 

To date, the factors affecting FedDIT remain unclear, and existing instruction augmentation methods focus primarily on the centralized setting without considering distributed environments. Our experiments reveal that cross-client domain coverage, rather than data heterogeneity, drives model performance in FedDIT. In response, we propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation. Extensive experiments across four distinct domains (code, medical, financial, and mathematical) substantiate the effectiveness of both methods. Additionally, we investigate privacy preservation against memory extraction attacks using varying amounts of public data. The results show no significant correlation between the volume of public data and the privacy-preserving capability. However, as the number of fine-tuning rounds increases, the risk of privacy leakage decreases or converges.

Table 1: Performance (%) of different augmentation settings on each domain, conducted via the FedAvg protocol with 10 clients. We additionally compare FedDCA with two other augmentation strategies: random sampling and direct retrieval (described in Appendix A.3).

Recently, federated instruction tuning (FedIT) has gained attention as a novel approach that leverages the principles of federated learning (FL) to facilitate collaborative training of large language models (LLMs) in distributed environments while maintaining the confidentiality of private data (McMahan et al., 2017; Ye et al., 2024b). This methodology exchanges only model parameters among distributed data holders, thereby striking a careful balance between privacy preservation and efficient model optimization.
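The greedy client center selection mentioned above can be pictured as a coverage-maximizing heuristic over client embeddings. The following is a minimal sketch only, assuming a farthest-point-style greedy objective; the function name, the embedding representation, and the coverage criterion are illustrative and are not FedDCA's exact formulation.

```python
import numpy as np

def greedy_center_selection(client_embeddings, k):
    """Hypothetical sketch: greedily pick k client centers so that every
    client is close to some chosen center (farthest-point heuristic).
    This is an assumed stand-in for a domain-coverage objective."""
    centers = [0]  # start from an arbitrary client
    for _ in range(k - 1):
        # distance from each client to its nearest already-chosen center
        dists = np.min(
            np.linalg.norm(
                client_embeddings[:, None, :]
                - client_embeddings[centers][None, :, :],
                axis=-1,
            ),
            axis=1,
        )
        # add the client currently worst covered by the selected centers
        centers.append(int(np.argmax(dists)))
    return centers
```

In a retrieval-based augmentation step, each selected center would then serve as a query into a public instruction pool, pulling in data that widens the domains represented across clients.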
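The parameter exchange underlying FedIT typically follows FedAvg (McMahan et al., 2017): each round, clients send locally updated parameters, and the server aggregates them weighted by local dataset size. A minimal sketch, with parameters flattened to lists of floats for clarity:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters, as in FedAvg.
    client_weights: one flat parameter list per client.
    client_sizes: number of local training examples per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Because only these aggregated parameters (not raw instructions) leave each client, the protocol preserves data confidentiality while still pooling the optimization signal from all participants.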