Appendix for Efficient Low-rank Backpropagation for Vision Transformer Adaptation

A  More Experimental Results for Full Training in Table 2, Section 4.2
Neural Information Processing Systems
Table 5 shows additional results for training the entire model. These results further demonstrate the effectiveness of our LBP-WHT approach.

Table 5: Results for full training on CF100, CF10, Cars, Flowers, Food, and Pets. "…" refers to our LBP-WHT method with …; "Hybrid" represents the CNN-ViT hybrid architecture. Any result with a higher speedup or mAcc than full BP is highlighted in bold.

Model                        Method   R  Speedup  mAcc   MFLOPs
EfficientFormer L1 (Hybrid)  Full BP  -  1.0      90.61  5841.09

On the other hand, LoRA efficiently reduces the memory needed to store the weight gradients. These results confirm the effectiveness of our method. As shown in Table 7, our method scales well on large-scale datasets.
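To make the comparison above concrete, the core idea behind LBP-WHT can be sketched as follows: for a linear layer y = x W, the exact weight gradient is xᵀg, and LBP-WHT approximates it by projecting both the activations and the output gradient onto a small number of Walsh-Hadamard basis rows before the matrix product. This is an illustrative toy sketch, not the paper's implementation: the function names are ours, we use the natural (not sequency) ordering of the Hadamard basis, and we ignore batching and the sequency-based base selection the method actually uses.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Build an n x n Hadamard matrix via Sylvester's construction
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def lowrank_weight_grad(x: np.ndarray, g: np.ndarray, r: int) -> np.ndarray:
    """Approximate the weight gradient x.T @ g of a linear layer y = x @ W
    by projecting x (n x d_in) and the output gradient g (n x d_out) onto
    the first r of n orthonormal Walsh-Hadamard basis rows.

    The final matmul shrinks from n to r inner-product terms; with r = n
    the projection is orthogonal (P.T @ P = I) and the result is exact.
    """
    n = x.shape[0]
    P = hadamard(n)[:r] / np.sqrt(n)   # (r, n) orthonormal projection rows
    return (P @ x).T @ (P @ g)         # (d_in, d_out) approximate gradient
```

With r = n this recovers the exact gradient; smaller r trades accuracy for fewer FLOPs, which is the speedup/mAcc trade-off the tables report.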