LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Yu Qiao
–arXiv.org Artificial Intelligence
Using 52K self-instruct demonstrations, LLaMA-Adapter introduces only 1.2M learnable parameters on top of the frozen LLaMA 7B model and fine-tunes in less than one hour on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts and prepend them to the word tokens at the higher transformer layers. Then, a zero-initialized attention mechanism with zero gating is proposed, which adaptively injects the new instructional cues into LLaMA while effectively preserving its pre-trained knowledge. With our efficient training, LLaMA-Adapter generates high-quality responses comparable to Alpaca, which fully fine-tunes all 7B parameters. Beyond language commands, our approach can be readily extended to multi-modal instructions for learning an image-conditioned LLaMA model, which achieves superior reasoning performance on the ScienceQA and COCO Caption benchmarks. Furthermore, we also evaluate the zero-initialized attention mechanism for fine-tuning other pre-trained models (ViT, RoBERTa) on traditional vision and language tasks, demonstrating the superior generalization capacity of our approach.
Jun-14-2023
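For intuition, below is a minimal PyTorch sketch of the zero-initialized, gated attention described in the abstract: learnable adaption prompts contribute extra keys/values, and a zero-initialized gate scales their attention so the frozen model's behavior is untouched at the start of training. The class and parameter names (`ZeroInitAttention`, `adaption_prompt`, `prompt_len`) are illustrative assumptions, not the authors' released code, and causal masking plus the frozen LLaMA weights are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitAttention(nn.Module):
    """Sketch of gated attention over learnable adaption prompts (hypothetical names)."""

    def __init__(self, dim, n_heads, prompt_len):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # Learnable adaption prompts prepended (as keys/values) at this layer.
        self.adaption_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Zero-initialized gate: the prompts contribute nothing at initialization,
        # so the pre-trained model's outputs are preserved early in training.
        self.gate = nn.Parameter(torch.zeros(1, n_heads, 1, 1))
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        B, T, C = x.shape
        prompt = self.adaption_prompt.unsqueeze(0).expand(B, -1, -1)

        q, k, v = self.qkv(x).chunk(3, dim=-1)
        _, pk, pv = self.qkv(prompt).chunk(3, dim=-1)

        def heads(t):
            return t.view(B, -1, self.n_heads, self.head_dim).transpose(1, 2)

        q, k, v, pk, pv = map(heads, (q, k, v, pk, pv))

        # Standard attention over the word tokens (causal mask omitted here).
        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = F.softmax(scores, dim=-1)

        # Attention over the adaption prompts, scaled by the zero-init gate.
        p_scores = (q @ pk.transpose(-2, -1)) / self.head_dim ** 0.5
        p_attn = self.gate * F.softmax(p_scores, dim=-1)

        out = attn @ v + p_attn @ pv
        out = out.transpose(1, 2).reshape(B, T, C)
        return self.proj(out)

# Usage example: with the gate at zero, the output equals plain self-attention.
layer = ZeroInitAttention(dim=512, n_heads=8, prompt_len=10)
y = layer(torch.randn(2, 16, 512))  # -> shape (2, 16, 512)
```

In such a setup, only `adaption_prompt` and `gate` (plus any prompt-side projections) would be trained, which is how the parameter count stays in the ~1.2M range while the LLaMA backbone remains frozen.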