Yu, Xingda
LoR2C: Low-Rank Residual Connection Adaptation for Parameter-Efficient Fine-Tuning
Zhao, Jiancheng, Yu, Xingda, Zhang, Yuxiang, Yang, Zhen
In recent years, pretrained large language models have demonstrated outstanding performance across various natural language processing tasks. However, full-parameter fine-tuning methods require adjusting all model parameters, leading to immense computational resource demands. Although parameter-efficient fine-tuning methods like LoRA have significantly reduced the number of trainable parameters, they still face challenges such as gradient vanishing and the potential for further parameter reduction. To address these issues, this paper proposes a novel parameter-efficient fine-tuning method called LoR2C (Low-Rank Residual Connection Adaptation). LoR2C introduces residual connections with low-rank matrices within the model layers, which not only reduces the number of fine-tuning parameters but also effectively alleviates the gradient vanishing problem. Additionally, this paper presents three optimization variants of LoR2C: ShareLoR2C, MergeLoR2C, and InjectLoR2C. These variants further improve parameter efficiency and model performance through parameter sharing, module merging, and injection mechanisms, respectively.

INTRODUCTION

In recent years, the scale of large language models (LLMs) has grown rapidly, and these models have demonstrated exceptional performance on various tasks. However, despite the significant performance improvements that full-parameter fine-tuning (FT) can bring, adjusting all the model parameters not only consumes massive computational resources but may also lead to overfitting and inefficient training. To address these challenges, researchers have proposed Parameter-Efficient Fine-Tuning (PEFT) methods aimed at reducing computational costs while maintaining fine-tuning effectiveness. LoRA [1] emerged in this context.
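The core idea described above can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration of a low-rank residual path added to a frozen layer, not the paper's actual implementation; the function name `lor2c_layer`, the dimensions, and the zero-initialization of `B` are illustrative choices (zero-init mirrors common LoRA practice so the adapted layer starts out identical to the frozen one).

```python
import numpy as np

def lor2c_layer(h, W, A, B):
    """One layer with a low-rank residual connection (illustrative sketch).

    h : (d,)   hidden state
    W : (d, d) frozen pretrained weight (not trained)
    A : (r, d) trainable down-projection, r << d
    B : (d, r) trainable up-projection

    The low-rank path B @ (A @ h) is added as a residual, so only
    2*r*d parameters per layer are trainable instead of d*d, and the
    residual path gives gradients a short route through the layer.
    """
    return W @ h + B @ (A @ h)

d, r = 8, 2
rng = np.random.default_rng(0)
h = rng.standard_normal(d)
W = rng.standard_normal((d, d))
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))  # zero-init: the residual path starts as a no-op

out = lor2c_layer(h, W, A, B)
# With B = 0, out equals the frozen layer's output W @ h exactly;
# training then moves A and B away from this starting point.
```

The parameter count makes the efficiency argument concrete: for d = 4096 and r = 8, the low-rank path trains 2·8·4096 ≈ 65K parameters per layer versus ~16.8M for the full d×d matrix.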
Adaptive H&E-IHC information fusion staining framework based on feature extractors
Jia, Yifan, Yu, Xingda, Ji, Zhengyang, Lai, Songning, Yue, Yutao
Immunohistochemistry (IHC) staining plays a significant role in the evaluation of diseases such as breast cancer. H&E-to-IHC transformation based on generative models provides a simple and cost-effective way to obtain IHC images. Although previous models perform digital staining well, they still suffer from two problems: (i) they stain only from pixel-level features that are not prominent in H&E, which easily causes information loss during staining; and (ii) the lack of pixel-perfect H&E-IHC ground-truth pairs poses a challenge to the classical L1 loss. To address these challenges, we propose an adaptive, information-enhanced staining framework based on feature extractors. We first propose the VMFE module, which effectively extracts staining-relevant features using multi-scale feature extraction and wavelet-transform convolution, combined with a shared decoder for feature fusion. A high-performance dual feature extractor for H&E-IHC is trained by contrastive learning, which effectively aligns H&E and IHC features in a high-dimensional latent space. The trained feature encoder is then used to enhance features and adaptively adjust the loss during H&E section staining, mitigating the problems of unclear and asymmetric information. We evaluated our method on different datasets and achieved excellent performance. Our code is available at https://github.com/babyinsunshine/CEFF
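The contrastive alignment of paired H&E and IHC features mentioned above can be sketched with a symmetric InfoNCE-style loss in numpy. This is a generic illustration, not the paper's actual loss: the function name `info_nce_loss`, the temperature value, and the batch construction are all assumptions; the paper's dual extractor may use a different contrastive formulation.

```python
import numpy as np

def info_nce_loss(he_feats, ihc_feats, temperature=0.1):
    """InfoNCE-style contrastive loss aligning paired feature batches (sketch).

    he_feats, ihc_feats : (n, d) L2-normalized feature vectors, where
    row i of each matrix comes from the same tissue region, so the
    matched pairs sit on the diagonal of the similarity matrix.
    """
    # cosine similarities between every H&E / IHC feature pair
    logits = he_feats @ ihc_feats.T / temperature
    # numerically stable row-wise log-softmax
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # maximize the probability of the matched (diagonal) pair
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
n, d = 4, 16
x = rng.standard_normal((n, d))
he = x / np.linalg.norm(x, axis=1, keepdims=True)

# Perfectly aligned pairs (identical features) give a lower loss
# than mismatched pairs (rows shuffled), which is what training exploits.
aligned_loss = info_nce_loss(he, he)
shuffled_loss = info_nce_loss(he, he[::-1])
```

Minimizing such a loss pulls each H&E feature toward its paired IHC feature and away from features of other slides, which is one standard way to align two modalities in a shared latent space.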