MIRA: A Method of Federated MultI-Task Learning for LaRge LAnguage Models
Elbakary, Ahmed, Issaid, Chaouki Ben, ElBatt, Tamer, Seddik, Karim, Bennis, Mehdi
arXiv.org Artificial Intelligence
In this paper, we introduce a method for fine-tuning Large Language Models (LLMs), inspired by multi-task learning in a federated setting. Our approach leverages the structure of each client's model and enables a learning scheme that accounts for other clients' tasks and data distributions. To mitigate the extensive computational and communication overhead often associated with LLMs, we utilize a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), which reduces the number of trainable parameters. Experimental results with different datasets and models demonstrate the proposed method's effectiveness compared to existing frameworks for federated fine-tuning of LLMs, in terms of both average and local performance. The proposed scheme outperforms existing baselines by achieving lower local loss for each client while maintaining comparable global performance.
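The LoRA technique the abstract relies on can be illustrated with a minimal sketch: the pretrained weight matrix stays frozen, and only a low-rank update is trained, which is what keeps the per-client computation and communication small. This is a generic NumPy illustration of the low-rank parameterization, not the authors' implementation; the dimensions, rank, and scaling factor below are hypothetical.

```python
import numpy as np

# Hypothetical dimensions, not taken from the paper.
d_out, d_in, r = 512, 512, 8  # r is the LoRA rank, r << min(d_out, d_in)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# LoRA factors: B starts at zero so the adapted layer initially
# matches the pretrained one; only A and B would be trained.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 16  # common scaling hyperparameter

def lora_forward(x):
    """Forward pass through the frozen weight plus the scaled low-rank update."""
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size            # parameters in the full weight matrix
lora_params = A.size + B.size   # trainable parameters under LoRA
print(full_params, lora_params)
```

With these (illustrative) shapes, full fine-tuning would update 262,144 parameters per layer, while LoRA trains only 8,192, which is why only the small adapter matrices need to be exchanged between clients in a federated setting.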
Oct-20-2024