PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models

Neural Information Processing Systems 

Gemma-7B fine-tuned with PiSSA achieves an accuracy of 77.7%, surpassing LoRA's 74.53% by over 3 percentage points. Because PiSSA shares LoRA's architecture, it is also compatible with quantization, further reducing the memory requirements of fine-tuning.
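PiSSA initializes the LoRA-style adapters from the principal singular components of the pretrained weight, freezing the residual. The following is a minimal NumPy sketch of that initialization; the matrix shapes, rank, and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))  # stand-in for a pretrained weight matrix
r = 4                              # adapter rank (illustrative choice)

# Full SVD of the pretrained weight: W = U diag(S) Vt
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Principal (top-r) singular components initialize the trainable adapters,
# splitting sqrt(S) between the two factors.
A = U[:, :r] * np.sqrt(S[:r])            # shape (64, r), trainable
B = np.sqrt(S[:r])[:, None] * Vt[:r]     # shape (r, 32), trainable

# The remaining minor components form the frozen residual weight.
W_res = U[:, r:] @ np.diag(S[r:]) @ Vt[r:]

# The decomposition is exact at initialization: W == W_res + A @ B
assert np.allclose(W, W_res + A @ B)
```

Since the forward pass computes `W_res + A @ B`, the architecture is identical to LoRA's, which is why quantizing the frozen `W_res` (as in QLoRA-style setups) remains possible.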
