LATTE: Low-Precision Approximate Attention with Head-wise Trainable Threshold for Efficient Transformer

arXiv.org Artificial Intelligence

With the rise of Transformer models in the NLP and CV domains, Multi-Head Attention (MHA) has proven to be a game-changer. However, its expensive computation limits model throughput and efficiency, especially on long-sequence tasks. Exploiting the sparsity in attention has proven to be an effective way to reduce computation. Nevertheless, prior works neither account for the differing score distributions across heads nor offer a systematic method for determining the threshold. To address these challenges, we propose Low-Precision Approximate Attention with Head-wise Trainable Threshold for Efficient Transformer (LATTE). LATTE employs a head-wise threshold-based filter with a low-precision dot product and a computation-reuse mechanism to reduce the computation of MHA. Moreover, a trainable threshold is introduced to provide a systematic way of adjusting the thresholds and to enable end-to-end optimization. Experimental results indicate that LATTE adapts smoothly to both NLP and CV tasks, offering significant computation savings with only a minor compromise in performance. The trainable threshold is also shown to be essential for balancing performance against computation. As a result, LATTE filters out up to 85.16% of keys with only a 0.87% accuracy drop on the CV task, and 89.91% of keys with a 0.86 perplexity increase on the NLP task.
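To make the filtering idea concrete, here is a minimal PyTorch sketch, not the paper's kernel: a fake-quantized dot product stands in for LATTE's low-precision arithmetic, the head-wise thresholds gate which keys receive exact scores, and the hard comparison omits whatever differentiable relaxation end-to-end threshold training would require. The function names and the 4-bit setting are illustrative assumptions.

```python
import torch

def fake_quant(x, bits=4):
    # Symmetric fake-quantization: a stand-in for the low-precision dot
    # product (the paper's exact number format is not assumed here).
    step = x.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / (2 ** (bits - 1) - 1)
    return (x / step).round() * step

def latte_attention(q, k, v, thresholds):
    # q, k, v: (batch, heads, seq, dim); thresholds: (heads,), trainable.
    scale = q.shape[-1] ** -0.5
    # Cheap approximate scores from quantized Q and K.
    approx = fake_quant(q) @ fake_quant(k).transpose(-2, -1) * scale
    # Head-wise filter: a key survives only if its approximate score clears
    # that head's threshold; the row maximum is always kept so the softmax
    # stays well-defined.
    keep = approx >= thresholds.view(1, -1, 1, 1)
    keep |= approx == approx.amax(dim=-1, keepdim=True)
    # Exact scores are only needed where the filter fired; a real kernel
    # skips the filtered entries instead of masking a dense matrix.
    scores = (q @ k.transpose(-2, -1)) * scale
    attn = torch.softmax(scores.masked_fill(~keep, float("-inf")), dim=-1)
    return attn @ v

q, k, v = (torch.randn(2, 8, 128, 64) for _ in range(3))
out = latte_attention(q, k, v, thresholds=torch.zeros(8))  # -> (2, 8, 128, 64)
```

The dense masking above reproduces the selection behavior only; the claimed computation savings come from a kernel that evaluates exact scores solely for the surviving keys.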


C3-SL: Circular Convolution-Based Batch-Wise Compression for Communication-Efficient Split Learning

arXiv.org Artificial Intelligence

Most existing studies improve the efficiency of split learning (SL) by compressing the transmitted features, but they focus on dimension-wise compression, which maps high-dimensional features into a low-dimensional space. In this paper, we propose circular convolution-based batch-wise compression for SL (C3-SL), which compresses multiple features into a single feature. To avoid information loss when merging multiple features, we exploit the quasi-orthogonality of features in high-dimensional space via circular convolution and superposition. To the best of our knowledge, we are the first to explore the potential of batch-wise compression in the SL scenario. In simulations on CIFAR-10 and CIFAR-100, our method achieves a 16x compression ratio with a negligible accuracy drop compared with vanilla SL. Moreover, C3-SL reduces memory overhead by 1152x and computation overhead by 2.25x compared with the state-of-the-art dimension-wise compression method.
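The bind-and-superpose step can be illustrated with a short NumPy sketch. This is a toy demonstration of circular-convolution binding under assumed shapes and Gaussian keys, not the C3-SL training pipeline; recovery after superposition is approximate, and in the paper's setting the network is trained end-to-end with these unbound features, which is what absorbs the crosstalk.

```python
import numpy as np

def circ_conv(a, b):
    # Circular convolution via FFT (the binding operation).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    # Circular correlation, the approximate inverse of binding.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

d, batch = 4096, 16
rng = np.random.default_rng(0)
features = rng.standard_normal((batch, d)) / np.sqrt(d)  # ~unit-norm features
keys = rng.standard_normal((batch, d)) / np.sqrt(d)      # quasi-orthogonal keys

# Compress: bind each feature to its key, superpose into ONE d-dim vector.
packed = np.sum([circ_conv(k, f) for k, f in zip(keys, features)], axis=0)

# Decompress: unbinding yields each feature plus crosstalk that shrinks
# as d grows relative to the batch size.
recovered = np.stack([circ_corr(k, packed) for k in keys])

# Each recovered slot is far closer to its own feature than to any other.
sims = recovered @ features.T
assert (sims.argmax(axis=1) == np.arange(batch)).all()
```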


AdaBoost-assisted Extreme Learning Machine for Efficient Online Sequential Classification

arXiv.org Machine Learning

In this paper, we propose an AdaBoost-assisted extreme learning machine for efficient online sequential classification (AOS-ELM). To achieve better accuracy in online sequential learning scenarios, we utilize the cost-sensitive AdaBoost algorithm, which diversifies the weak classifiers, and add a forgetting mechanism, which stabilizes performance during the training procedure. Hence, AOS-ELM adapts better to sequentially arriving data than other voting-based methods. The experimental results show that AOS-ELM achieves 94.41% accuracy on the MNIST dataset, matching the theoretical accuracy bound of the original batch learning algorithm, AdaBoost-ELM. Moreover, with the forgetting mechanism, the standard deviation of accuracy during the online sequential learning process is reduced by 8.26x.
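For context, below is a compact NumPy sketch of the online-sequential ELM update that methods like this build on. The forgetting-factor recursive least squares shown here is an assumed stand-in for the paper's forgetting mechanism, the AdaBoost ensemble layer is omitted, and all class and parameter names are illustrative.

```python
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, n_out, forget=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # fixed random input weights
        self.b = rng.standard_normal(n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))         # learned output weights
        self.P = np.eye(n_hidden) * 1e3                 # inverse-covariance estimate
        self.forget = forget                            # <1 discounts old chunks

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        # Recursive least-squares update on one arriving chunk (X, T);
        # the forgetting factor down-weights older data.
        H, lam = self._hidden(X), self.forget
        G = np.linalg.solve(lam * np.eye(len(H)) + H @ self.P @ H.T, H @ self.P)
        self.P = (self.P - self.P @ H.T @ G) / lam
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Chunks of (X, T) arriving over time would each be passed to partial_fit;
# an AdaBoost-style ensemble would maintain several such learners with
# sample- and learner-weights updated as data streams in.
```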