Appendix for Efficient Low-rank Backpropagation for Vision Transformer Adaptation

A. More Experimental Results for Full Training in Table 2, Section 4.2

Neural Information Processing Systems

Table 5 shows more results for training the entire model; these results further demonstrate the effectiveness of our LBP-WHT approach. For each model and method, the table reports the rank R, speedup, mean accuracy (mAcc), MFLOPs, and per-dataset accuracy on CIFAR-100 (CF100), CIFAR-10 (CF10), Cars, Flowers, Food, and Pets. As the reference point, full backpropagation (Full BP) on EfficientFormer-L1, where "Hybrid" denotes its CNN-ViT hybrid architecture, achieves 90.61 mAcc at 1.0x speedup with 5841.09 MFLOPs. The caption also explains the naming of our LBP-WHT variants, and any result with higher speedup or mAcc than the baseline is highlighted in bold. LoRA, by contrast, efficiently reduces the memory needed to store weight gradients. These results confirm the effectiveness of our method, and, as shown in Table 7, it also scales well to large-scale datasets.
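To make the source of these speedups concrete, the sketch below illustrates the core idea of low-rank backpropagation with a Walsh-Hadamard basis on a single linear layer. This is a minimal sketch under stated assumptions, not the paper's implementation: the `hadamard` helper, the `LowRankLinearFn` class, and the choice to keep the input gradient exact are all illustrative, and the basis uses natural (Sylvester) ordering rather than whatever sequency ordering the paper actually uses.

```python
# Minimal sketch of low-rank backpropagation with a Walsh-Hadamard basis.
# Assumptions: a plain linear layer, token count a power of two, and the
# first R (natural-ordered) Hadamard rows as the projection basis.
import torch

def hadamard(n: int) -> torch.Tensor:
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H

class LowRankLinearFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, basis):
        # x: (tokens, in_features). The forward pass is exact.
        ctx.save_for_backward(x, weight, basis)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, weight, basis = ctx.saved_tensors
        # Project the output gradient and the input onto the first R
        # Walsh-Hadamard rows along the token dimension, then form the
        # weight gradient in that rank-R space (the FLOP saving).
        coeff = basis @ grad_out          # (R, out_features)
        x_low = basis @ x                 # (R, in_features)
        grad_w = coeff.t() @ x_low        # approximate weight gradient
        grad_x = grad_out @ weight        # input gradient kept exact here
        return grad_x, grad_w, None

tokens, d_in, d_out, rank = 64, 32, 16, 8
H = hadamard(tokens) / tokens ** 0.5      # orthonormal rows
basis = H[:rank]                          # first R Walsh-Hadamard vectors
x = torch.randn(tokens, d_in, requires_grad=True)
w = torch.randn(d_out, d_in, requires_grad=True)
y = LowRankLinearFn.apply(x, w, basis)
y.sum().backward()                        # grad_w computed at rank R
```

The saving comes from computing the weight gradient with R-row matrices instead of full token-length ones; how close the approximation stays to Full BP is exactly what the mAcc columns in Table 5 measure.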



Appendix A: Latency-Driven Slimming Algorithm

Neural Information Processing Systems

We provide the details of the proposed latency-driven fast slimming in Alg. 1; the formulation, together with our major conclusions and speed analysis, can be found in Sec. 3 and Figure 1. Comparing against non-overlapping large-kernel patch embedding (V5 in Tab. 3), we find that MHSA with a global receptive field is an essential contribution to model performance. Comparing V1 and V2 in Tab. 3 shows the effect of GN. We also explore ReLU and HardSwish (V3 and V4 in Tab. 3) in addition to GeLU, and conclude that the activation function can be selected case by case depending on the specific hardware and compiler. In this work, we use GeLU, as it provides better performance than ReLU while still executing fast.
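Since Alg. 1 itself is not reproduced here, the following is a hedged sketch of what a latency-driven slimming loop generally looks like: blocks with the worst importance-per-latency trade-off are dropped until a latency budget is met. The `Block` type, the `importance` and `latency_ms` fields, and the greedy rule are all assumptions for illustration, not the paper's actual algorithm.

```python
# Hedged sketch of latency-driven slimming: greedily drop the block with
# the least importance per unit of measured latency until the network
# fits a latency budget. All names here are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    importance: float   # e.g., a learned gating score from a supernet
    latency_ms: float   # measured on the target device

def slim(blocks: list[Block], budget_ms: float) -> list[Block]:
    """Keep the blocks with the best importance/latency ratio that fit."""
    kept = sorted(blocks, key=lambda b: b.importance / b.latency_ms,
                  reverse=True)
    while sum(b.latency_ms for b in kept) > budget_ms and len(kept) > 1:
        kept.pop()  # remove the current worst importance/latency trade-off
    return kept

blocks = [Block("mhsa_3", 0.9, 2.0), Block("ffn_2", 0.4, 1.5),
          Block("mhsa_5", 0.7, 2.5), Block("ffn_4", 0.2, 1.0)]
print([b.name for b in slim(blocks, budget_ms=5.0)])  # -> ['mhsa_3', 'mhsa_5']
```

Measuring `latency_ms` on the actual target hardware, rather than using FLOPs as a proxy, is what makes this kind of search "latency-driven"; the same measurement-first reasoning is why the activation function above is chosen per hardware and compiler.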