HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy
Yongkang Liu, Yiqun Zhang, Qian Li, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
Full-parameter fine-tuning has become the go-to choice for adapting language models (LMs) to downstream tasks because of its excellent performance. As LMs grow in size, however, fine-tuning all of their parameters requires a prohibitively large amount of GPU memory. Existing approaches use zeroth-order optimizers [26, 27] to conserve GPU memory, which can compromise LM performance, since first-order (non-zeroth-order) optimizers tend to converge more readily on most downstream tasks [27, 2]. In this paper, we propose HiFT, a novel optimizer-independent end-to-end hierarchical fine-tuning strategy that updates only a subset of parameters at each training step. HiFT significantly reduces the number of gradients and optimizer state parameters that must reside in GPU memory at any one time, thereby reducing overall GPU memory usage. Our results demonstrate that HiFT achieves performance comparable to parameter-efficient fine-tuning and standard full-parameter fine-tuning.
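The core idea described above, updating only one group of parameters per training step so that gradients and optimizer state are materialized only for that group, can be illustrated with a short PyTorch-style sketch. The even parameter split, cyclic group order, choice of AdamW, and Hugging-Face-style `model(**batch).loss` interface are illustrative assumptions, not the authors' exact procedure; offloading of inactive optimizer states is omitted.

```python
# Minimal sketch of a hierarchical fine-tuning loop in the spirit of HiFT,
# assuming a PyTorch model whose parameters can be split into groups.
import torch


def split_into_groups(model, num_groups):
    """Evenly partition named parameters into contiguous groups (assumption)."""
    named = list(model.named_parameters())
    size = max(1, len(named) // num_groups)
    return [named[i:i + size] for i in range(0, len(named), size)]


def hift_train(model, dataloader, num_groups=4, lr=1e-5, epochs=1):
    groups = split_into_groups(model, num_groups)
    # One optimizer per group, so optimizer state is only built for that subset.
    optimizers = [torch.optim.AdamW([p for _, p in g], lr=lr) for g in groups]

    step = 0
    for _ in range(epochs):
        for batch in dataloader:
            idx = step % len(groups)                 # cycle through groups
            active = {name for name, _ in groups[idx]}

            # Freeze everything except the active group: gradients are only
            # materialized for the parameters being updated at this step.
            for name, p in model.named_parameters():
                p.requires_grad_(name in active)

            loss = model(**batch).loss               # HF-style forward (assumption)
            loss.backward()
            optimizers[idx].step()
            optimizers[idx].zero_grad(set_to_none=True)
            step += 1
```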
arXiv.org Artificial Intelligence
Jan-26-2024