Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption
Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian
–arXiv.org Artificial Intelligence
Preserving data confidentiality during the fine-tuning of open-source Large Language Models (LLMs) is crucial for sensitive applications. This work introduces an interactive protocol adapting the Low-Rank Adaptation (LoRA) technique for private fine-tuning. Homomorphic Encryption (HE) protects the confidentiality of training data and gradients handled by remote worker nodes performing the bulk of computations involving the base model weights. The data owner orchestrates training, requiring minimal local computing power and memory, thus alleviating the need for expensive client-side GPUs. We demonstrate feasibility by fine-tuning a Llama-3.2-1B model, presenting convergence results using HE-compatible quantization and performance benchmarks for HE computations on GPU hardware. This approach enables applications such as confidential knowledge base question answering, private codebase fine-tuning for AI code assistants, AI agents for drafting emails based on a company's email archive, and adapting models to analyze sensitive legal or healthcare documents.
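The abstract describes a split of labor: remote workers carry out the heavy products with the frozen base-model weights under homomorphic encryption, while the data owner keeps and updates only the small LoRA factors locally. The following toy NumPy sketch illustrates that split for a single linear layer; it is not the paper's implementation, HE is omitted, and all dimensions, scaling, and variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's protocol): LoRA augments a frozen
# weight W with a low-rank update B @ A, so the layer computes
#   y = W @ x + (alpha / r) * B @ (A @ x).
# In the setting described above, the expensive W @ x product would run on
# remote worker nodes under homomorphic encryption, while the small
# trainable factors A and B stay on the data owner's machine.

rng = np.random.default_rng(0)

d, r = 16, 4    # model dimension and LoRA rank (toy sizes)
alpha = 8.0     # LoRA scaling hyperparameter (assumed value)

W = rng.normal(size=(d, d))          # frozen base weight (worker side)
A = rng.normal(size=(r, d)) * 0.01   # LoRA down-projection (client side)
B = np.zeros((d, r))                 # LoRA up-projection, zero-initialized

x = rng.normal(size=(d,))

# Worker side: heavy product with the frozen base weight
# (performed under HE on encrypted activations in the paper's setting).
base_out = W @ x

# Client side: cheap low-rank correction, kept in the clear locally.
lora_out = (alpha / r) * (B @ (A @ x))

y = base_out + lora_out

# With B initialized to zero, the LoRA branch contributes nothing yet,
# so before any training step y matches the frozen model's output.
assert np.allclose(y, W @ x)
```

Because the rank r is far smaller than the model dimension, the client's per-step work and memory scale with the adapter size rather than the full model, which is what lets the data owner orchestrate training without an expensive local GPU.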
May-13-2025