Edge-FIT: Federated Instruction Tuning of Quantized LLMs for Privacy-Preserving Smart Home Environments
Vinay Venkatesh, Vamsidhar R. Kamanuru, Lav Kumar, Nikita Kothari
arXiv.org Artificial Intelligence
This paper proposes Edge-FIT (Federated Instruction Tuning on the Edge), a scalable framework for Federated Instruction Tuning (FIT) of Large Language Models (LLMs). Traditional Federated Learning (TFL) methods, such as FedAvg, fail when confronted with the massive parameter counts of LLMs [3], [6]. Edge-FIT combines federated learning with 4-bit Quantized Low-Rank Adaptation (QLoRA), mitigating the core communication and computational overheads. We demonstrate this by filtering the general-purpose Databricks Dolly 15k dataset for the IoT domain. Experimental results show that the Edge-FIT-tuned Llama 2 (7B) achieves an F1-score of 0.89. We also demonstrate a viable trade-off using the 3.8B Phi-3-mini model, validating Edge-FIT as a scalable framework for decentralized LLM deployment on home compute gateways.
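The key efficiency argument is that with QLoRA the 4-bit quantized base model stays frozen on each device, so only the small low-rank adapter matrices need to be trained locally and aggregated by the server. The sketch below illustrates that aggregation step as a standard weighted FedAvg over adapter tensors; the function and parameter names are hypothetical for illustration and are not taken from the paper's implementation.

```python
import numpy as np

def fedavg_adapters(client_adapters, client_sizes):
    """Weighted FedAvg over clients' LoRA adapter matrices.

    client_adapters: list of dicts mapping adapter name -> np.ndarray
    client_sizes: local example counts, used as aggregation weights
    """
    total = sum(client_sizes)
    keys = client_adapters[0].keys()
    return {
        k: sum(n * a[k] for n, a in zip(client_sizes, client_adapters)) / total
        for k in keys
    }

# Hypothetical round: 3 home gateways each hold rank-r LoRA factors
# A (r x d) and B (d x r) for one layer. The quantized base weights are
# never transmitted; only these small matrices cross the network.
d, r = 8, 2
rng = np.random.default_rng(0)
clients = [
    {"lora_A": rng.normal(size=(r, d)), "lora_B": np.zeros((d, r))}
    for _ in range(3)
]
global_update = fedavg_adapters(clients, client_sizes=[100, 200, 100])
```

With rank r much smaller than the model dimension, each round exchanges only O(r·d) values per adapted layer instead of the full 7B-parameter model, which is the communication saving the abstract claims.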
Oct 7, 2025