LoRA$^2$: Multi-Scale Low-Rank Approximations for Fine-Tuning Large Language Models