Tang, Yehui
EMS-SD: Efficient Multi-sample Speculative Decoding for Accelerating Large Language Models
Ni, Yunsheng, Liu, Chuanjian, Tang, Yehui, Han, Kai, Wang, Yunhe
Speculative decoding has emerged as a pivotal technique for accelerating the inference of Large Language Models (LLMs). Despite recent research aiming to improve prediction efficiency, multi-sample speculative decoding has been overlooked because the number of accepted tokens varies across samples within a batch during the verification phase. The vanilla method adds padding tokens to keep the number of new tokens consistent across samples, which increases computational and memory-access overhead and thereby reduces the speedup ratio. We propose a novel method that resolves the inconsistency in the number of tokens accepted by different samples without requiring any increase in memory or computing overhead. Furthermore, our method handles samples whose predicted tokens differ in number without adding padding tokens. Extensive experiments demonstrate the efficacy of our method. Our code is available at https://github.com/niyunsheng/EMS-SD.
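To illustrate the batching problem the abstract describes, here is a minimal sketch (plain Python, not the EMS-SD implementation; the function names and the offset-based packing scheme are assumptions for illustration) that contrasts padding every sample to the longest accepted length with packing variable-length accepted tokens so that no pad token is processed.

```python
# Hypothetical illustration (not the EMS-SD code): after verification, each
# sample in the batch accepts a different number of draft tokens.
from typing import List

def pad_new_tokens(accepted: List[List[int]], pad_id: int) -> List[List[int]]:
    """Vanilla batching: pad every sample to the longest accepted length,
    wasting compute and memory access on pad positions."""
    max_len = max(len(a) for a in accepted)
    return [a + [pad_id] * (max_len - len(a)) for a in accepted]

def pack_new_tokens(accepted: List[List[int]]):
    """Padding-free packing: flatten accepted tokens into one sequence and
    keep per-sample offsets, so no pad token is ever processed."""
    flat, offsets = [], [0]
    for a in accepted:
        flat.extend(a)
        offsets.append(len(flat))
    return flat, offsets  # offsets[i]:offsets[i+1] indexes sample i

if __name__ == "__main__":
    accepted = [[11, 12, 13], [21], [31, 32]]   # 3, 1, 2 tokens accepted
    print(pad_new_tokens(accepted, pad_id=0))   # every sample padded to length 3
    print(pack_new_tokens(accepted))            # ([11, 12, 13, 21, 31, 32], [0, 3, 4, 6])
```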
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
Jie, Shibo, Tang, Yehui, Ding, Ning, Deng, Zhi-Hong, Han, Kai, Wang, Yunhe
Current solutions for efficiently constructing large vision-language (VL) models follow a two-step paradigm: projecting the output of pre-trained vision encoders to the input space of pre-trained language models as visual prompts, and then transferring the models to downstream VL tasks via end-to-end parameter-efficient fine-tuning (PEFT). However, this paradigm is still inefficient because it significantly increases the input length of the language models. In this paper, in contrast to integrating visual prompts into the inputs, we regard visual prompts as additional knowledge that facilitates language models in addressing tasks associated with visual information. Motivated by the finding that the Feed-Forward Network (FFN) of language models acts as "key-value memory", we introduce a novel approach termed memory-space visual prompting (MemVP), wherein visual prompts are concatenated with the weights of the FFN for visual knowledge injection. Experimental results across various VL tasks and language models reveal that MemVP significantly reduces the training time and inference latency of the fine-tuned VL models and surpasses the performance of previous PEFT methods. Code: https://github.com/JieShibo/MemVP
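The key idea, injecting visual features into the FFN rather than the input sequence, can be sketched as follows. This is a minimal sketch under assumed shapes and a simple concatenation scheme, not the released MemVP code: the FFN is viewed as key-value memory, and projected visual features are appended as extra key/value slots, so the input length stays unchanged.

```python
# Minimal sketch (assumed shapes and fusion, not the released MemVP code).
import torch
import torch.nn.functional as F

def ffn_with_memory_prompts(x, W_up, W_down, vis_keys, vis_values):
    """
    x:          (seq, d_model)      token hidden states
    W_up:       (d_model, d_ffn)    FFN "keys"
    W_down:     (d_ffn, d_model)    FFN "values"
    vis_keys:   (d_model, n_vis)    projected visual features acting as extra keys
    vis_values: (n_vis, d_model)    projected visual features acting as extra values
    """
    keys = torch.cat([W_up, vis_keys], dim=1)        # (d_model, d_ffn + n_vis)
    values = torch.cat([W_down, vis_values], dim=0)  # (d_ffn + n_vis, d_model)
    return F.gelu(x @ keys) @ values                 # output has the same seq length as the input

seq, d_model, d_ffn, n_vis = 8, 32, 64, 4
out = ffn_with_memory_prompts(
    torch.randn(seq, d_model), torch.randn(d_model, d_ffn), torch.randn(d_ffn, d_model),
    torch.randn(d_model, n_vis), torch.randn(n_vis, d_model))
print(out.shape)  # torch.Size([8, 32]) -- no extra input tokens are introduced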
Kangaroo: Lossless Self-Speculative Decoding via Double Early Exiting
Liu, Fangcheng, Tang, Yehui, Liu, Zhenhua, Ni, Yunsheng, Han, Kai, Wang, Yunhe
Speculative decoding has demonstrated its effectiveness in accelerating the inference of large language models while maintaining a consistent sampling distribution. However, the conventional approach of training a separate draft model to achieve a satisfactory token acceptance rate can be costly. Drawing inspiration from early exiting, we propose a novel self-speculative decoding framework \emph{Kangaroo}, which uses a fixed shallow sub-network as a self-draft model, with the remaining layers serving as the larger target model. We train a lightweight and efficient adapter module on top of the sub-network to bridge the gap between the sub-network and the full model's representation ability. It is noteworthy that the inference latency of the self-draft model may no longer be negligible compared to the large model, necessitating strategies to increase the token acceptance rate while minimizing the drafting steps of the small model. To address this challenge, we introduce an additional early exiting mechanism for generating draft tokens. Specifically, we halt the small model's subsequent prediction during the drafting phase once the confidence level for the current token falls below a certain threshold. Extensive experiments on the Spec-Bench demonstrate the effectiveness of Kangaroo. Under single-sequence verification, Kangaroo achieves speedups up to $1.68\times$ on Spec-Bench, outperforming Medusa-1 with 88.7\% fewer additional parameters (67M compared to 591M). The code for Kangaroo is available at https://github.com/Equationliu/Kangaroo.
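The drafting-phase early exit can be pictured with a short sketch. The draft_step callable, the threshold value, and the loop structure are placeholders for illustration; this is not the released Kangaroo code.

```python
# Simplified drafting loop with confidence-based early exit (placeholder
# draft_step and threshold; not the released Kangaroo implementation).
import torch

def draft_with_early_exit(draft_step, prefix, max_draft=6, threshold=0.6):
    """Keep proposing tokens with the shallow self-draft model, but stop as soon
    as its confidence in the current token falls below the threshold."""
    tokens, confs = list(prefix), []
    for _ in range(max_draft):
        logits = draft_step(tokens)          # (vocab,) logits from the shallow sub-network
        p = torch.softmax(logits, dim=-1)
        conf, tok = p.max(dim=-1)
        if conf.item() < threshold:          # early exit: further drafting is unlikely to pay off
            break
        tokens.append(tok.item())
        confs.append(conf.item())
    return tokens[len(prefix):], confs       # drafted tokens go to the full model for verification

# Toy usage with a random "draft model" over a 100-token vocabulary.
drafted, confidences = draft_with_early_exit(lambda t: torch.randn(100), prefix=[1, 2, 3])
print(drafted, confidences)
```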
DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models
He, Wei, Han, Kai, Tang, Yehui, Wang, Chengcheng, Yang, Yujie, Guo, Tianyu, Wang, Yunhe
Large language models (LLMs) face a daunting challenge due to the excessive computational and memory requirements of the commonly used Transformer architecture. While state space models (SSMs) are a new type of foundation network architecture offering lower computational complexity, their performance has yet to fully rival that of Transformers. This paper introduces DenseSSM, a novel approach to enhance the flow of hidden information between layers in SSMs. By selectively integrating shallow-layer hidden states into deeper layers, DenseSSM retains fine-grained information crucial for the final output. Despite the added dense connections, DenseSSM maintains training parallelizability and inference efficiency. The proposed method is widely applicable to various SSM types such as RetNet and Mamba. At a similar model size, DenseSSM achieves significant improvements, exemplified by DenseRetNet outperforming the original RetNet by up to 5% accuracy on public benchmarks. Code is available at https://github.com/WailordHe/DenseSSM
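A dense hidden connection of the kind described can be sketched as a projection-and-add fusion of shallower layers' hidden states into the current layer. The per-layer linear projection and additive fusion below are illustrative assumptions, not the DenseSSM implementation.

```python
# Minimal sketch of a dense hidden connection (projection and additive fusion
# are assumptions for illustration, not the DenseSSM code).
import torch
import torch.nn as nn

class DenseHiddenFusion(nn.Module):
    """Fuse the hidden states of all shallower layers into the current layer."""
    def __init__(self, d_hidden: int, n_prev_layers: int):
        super().__init__()
        # One lightweight projection per shallower layer.
        self.projs = nn.ModuleList(nn.Linear(d_hidden, d_hidden, bias=False)
                                   for _ in range(n_prev_layers))

    def forward(self, h_current, prev_hiddens):
        # h_current: (batch, d_hidden); prev_hiddens: list of (batch, d_hidden)
        fused = h_current
        for proj, h_prev in zip(self.projs, prev_hiddens):
            fused = fused + proj(h_prev)   # re-inject shallow-layer information
        return fused

fusion = DenseHiddenFusion(d_hidden=16, n_prev_layers=2)
h = torch.randn(4, 16)
print(fusion(h, [torch.randn(4, 16), torch.randn(4, 16)]).shape)  # torch.Size([4, 16])
```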
Rethinking Optimization and Architecture for Tiny Language Models
Tang, Yehui, Liu, Fangcheng, Ni, Yunsheng, Tian, Yuchuan, Bai, Zheyuan, Hu, Yi-Qi, Liu, Sichao, Jui, Shangling, Han, Kai, Wang, Yunhe
The power of large language models (LLMs) has been demonstrated through numerous data and computing resources. However, the application of language models on mobile devices faces huge challenges in computation and memory costs; that is, tiny language models with high performance are urgently required. Because the training process is highly complex, many details of optimizing language models are seldom studied carefully. In this study, based on a tiny language model with 1B parameters, we carefully design a series of empirical studies to analyze the effect of each component. Three perspectives are mainly discussed, i.e., neural architecture, parameter initialization, and optimization strategy. Several design formulas are empirically shown to be especially effective for tiny language models, including tokenizer compression, architecture tweaking, parameter inheritance, and multiple-round training. We then train PanGu-$\pi$-1B Pro and PanGu-$\pi$-1.5B Pro on 1.6T multilingual corpora, following the established formulas. Experimental results demonstrate that the improved optimization and architecture yield a notable average improvement of 8.87 on benchmark evaluation sets for PanGu-$\pi$-1B Pro. Besides, PanGu-$\pi$-1.5B Pro surpasses a range of SOTA models with larger model sizes, validating its superior performance. The code is available at https://github.com/YuchuanTian/RethinkTinyLM.
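As one illustration of the "tokenizer compression with parameter inheritance" ingredient, a frequency-based vocabulary reduction might look like the sketch below. The keep ratio, sizes, and function names are made up for illustration; this is not the exact PanGu-$\pi$ Pro recipe.

```python
# Illustrative-only sketch of tokenizer compression: keep the most frequent
# tokens and inherit their embedding rows (keep ratio and sizes are invented;
# not the PanGu-pi Pro recipe verbatim).
import torch

def compress_vocab(embedding: torch.Tensor, token_freqs: torch.Tensor, keep_ratio: float = 0.5):
    """embedding: (vocab, d_model); token_freqs: (vocab,) corpus counts."""
    k = int(embedding.shape[0] * keep_ratio)
    kept_ids = torch.topk(token_freqs, k).indices.sort().values  # most frequent tokens
    new_embedding = embedding[kept_ids]                          # parameter inheritance
    old_to_new = {int(old): new for new, old in enumerate(kept_ids.tolist())}
    return new_embedding, old_to_new

emb = torch.randn(1000, 64)
freqs = torch.randint(0, 10_000, (1000,))
small_emb, remap = compress_vocab(emb, freqs, keep_ratio=0.3)
print(small_emb.shape, len(remap))  # torch.Size([300, 64]) 300
```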
A Survey on Transformer Compression
Tang, Yehui, Wang, Yunhe, Guo, Jianyuan, Tu, Zhijun, Han, Kai, Hu, Hailin, Tao, Dacheng
Large models based on the Transformer architecture play increasingly vital roles in artificial intelligence, particularly within the realms of natural language processing (NLP) and computer vision (CV). Model compression methods reduce their memory and computational cost, which is a necessary step to implement transformer models on practical devices. Given the unique architecture of the transformer, featuring alternating attention and Feedforward Neural Network (FFN) modules, specific compression techniques are required. The efficiency of these compression methods is also paramount, as it is usually impractical to retrain large models on the entire training dataset. This survey provides a comprehensive review of recent compression methods, with a specific focus on their application to transformer models. The compression methods are primarily categorized into pruning, quantization, knowledge distillation, and efficient architecture design. In each category, we discuss compression methods for both CV and NLP tasks, highlighting common underlying principles. Finally, we delve into the relation between various compression methods and discuss further directions in this domain.
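As a worked example of one category the survey covers, symmetric per-tensor int8 quantization of a float32 weight tensor cuts its memory footprint by roughly a factor of four. The snippet below is a generic recipe for illustration, not a method proposed by the survey itself.

```python
# Generic symmetric per-tensor int8 quantization (illustrative, not a method
# from the survey): float32 weights (4 B/element) become int8 (1 B/element).
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                        # symmetric range [-127, 127]
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)                              # float32 weight matrix
q, scale = quantize_int8(w)                              # ~4x smaller in memory
err = (dequantize(q, scale) - w).abs().mean()
print(f"memory: {w.numel() * 4} B -> {q.numel() * 1} B, mean abs error {err.item():.4f}")
```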
CBQ: Cross-Block Quantization for Large Language Models
Ding, Xin, Liu, Xiaoyu, Tu, Zhijun, Zhang, Yun, Li, Wei, Hu, Jie, Chen, Hanting, Tang, Yehui, Xiong, Zhiwei, Yin, Baoqun, Wang, Yunhe
Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) with ultra-low costs. However, existing PTQ methods only focus on handling the outliers within one layer or one block, which ignores the dependency of blocks and leads to severe performance degradation in low-bit settings. In this paper, we propose CBQ, a cross-block reconstruction-based PTQ method for LLMs. CBQ employs a cross-block dependency using a homologous reconstruction scheme, establishing long-range dependencies across multiple blocks to minimize error accumulation. Furthermore, CBQ incorporates a coarse-to-fine preprocessing (CFP) strategy for suppressing weight and activation outliers, coupled with an adaptive LoRA-Rounding technique for precise weight quantization. These innovations enable CBQ to not only handle extreme outliers effectively but also improve overall quantization accuracy. Extensive experiments show that CBQ achieves superior low-bit quantization (W4A4, W4A8, W2A16) and outperforms existing state-of-the-art methods across various LLMs and datasets. Notably, CBQ quantizes the 4-bit LLAMA1-65B model within only 4.3 hours on a single GPU, achieving a commendable tradeoff between performance and quantization efficiency.
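The cross-block idea, optimizing quantization parameters against the full-precision outputs of a window spanning several blocks rather than a single block, can be pictured with the rough sketch below. The window size, loss, learnable scale parameterization, and optimizer are all assumptions for illustration, not the CBQ implementation.

```python
# Rough sketch of cross-block reconstruction (window size, loss, and learnable
# quantization step are assumptions, not the CBQ implementation).
import torch

def fake_quant(w, step):
    # Fake quantization; here gradients reach the scale only through the
    # outer multiplication (a crude stand-in for a proper estimator).
    return torch.clamp(torch.round(w / step), -127, 127) * step

def reconstruct_window(fp_blocks, x, n_steps=100, lr=1e-3):
    """fp_blocks: list of full-precision linear blocks forming one cross-block window."""
    steps = [torch.full((), b.weight.abs().max().item() / 127, requires_grad=True)
             for b in fp_blocks]
    opt = torch.optim.Adam(steps, lr=lr)
    for _ in range(n_steps):
        y_fp, y_q = x, x
        for b, s in zip(fp_blocks, steps):
            y_fp = torch.relu(y_fp @ b.weight.T)                  # full-precision path
            y_q = torch.relu(y_q @ fake_quant(b.weight, s).T)     # quantized path across the whole window
        loss = torch.nn.functional.mse_loss(y_q, y_fp.detach())   # error accumulated over several blocks
        opt.zero_grad()
        loss.backward()
        opt.step()
    return steps

blocks = [torch.nn.Linear(32, 32, bias=False) for _ in range(3)]  # one 3-block window
print([s.item() for s in reconstruct_window(blocks, torch.randn(8, 32))])
```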
PanGu-$\pi$: Enhancing Language Model Architectures via Nonlinearity Compensation
Wang, Yunhe, Chen, Hanting, Tang, Yehui, Guo, Tianyu, Han, Kai, Nie, Ying, Wang, Xutao, Hu, Hailin, Bai, Zheyuan, Wang, Yun, Liu, Fangcheng, Liu, Zhicheng, Guo, Jianyuan, Zeng, Sinan, Zhang, Yinchen, Xu, Qinghua, Liu, Qun, Yao, Jun, Xu, Chao, Tao, Dacheng
The recent trend in large language models (LLMs) is to increase the scale of both the model size (a.k.a. the number of parameters) and the dataset to achieve better generative ability, as demonstrated by a large body of work such as the famous GPT and Llama models. However, large models often involve massive computational costs, and practical applications cannot afford such high prices. Yet the method of constructing a strong model architecture for LLMs is rarely discussed. We first analyze the state-of-the-art language model architectures and observe the feature collapse problem. Based on the theoretical analysis, we propose that nonlinearity, which is usually studied in convolutional neural networks for vision tasks, is also very important for language models. The series informed activation function is then introduced with tiny calculations that can be ignored, and an augmented shortcut is further used to enhance the model nonlinearity. We then demonstrate that the proposed approach is significantly effective for enhancing the model nonlinearity through carefully designed ablations; thus, we present a new efficient model architecture for establishing modern LLMs, namely PanGu-$\pi$. Experiments are then conducted using the same dataset and training strategy to compare PanGu-$\pi$ with state-of-the-art LLMs. The results show that PanGu-$\pi$-7B can achieve performance comparable to that of benchmarks with about a 10% inference speed-up, and PanGu-$\pi$-1B can achieve state-of-the-art performance in terms of accuracy and efficiency. In addition, we have deployed PanGu-$\pi$-7B in the high-value domains of finance and law, developing an LLM named YunShan for practical application. The results show that YunShan can surpass other models of similar scale on benchmarks.
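To make the "series informed activation" notion concrete, one simplified reading is a learnable sum of shifted base activations, adding nonlinearity with only a few extra element-wise operations. The parameterization below is a simplified assumption for illustration, not the exact form used in PanGu-$\pi$.

```python
# Sketch of a series-informed activation (simplified to a learnable sum of
# shifted GELUs; not the exact PanGu-pi parameterization).
import torch
import torch.nn as nn

class SeriesActivation(nn.Module):
    def __init__(self, n_terms: int = 3):
        super().__init__()
        self.scales = nn.Parameter(torch.ones(n_terms))
        self.shifts = nn.Parameter(torch.linspace(-1.0, 1.0, n_terms))

    def forward(self, x):
        # A few extra element-wise ops, but strictly more nonlinearity than a
        # single activation function.
        return sum(a * torch.nn.functional.gelu(x + b)
                   for a, b in zip(self.scales, self.shifts))

act = SeriesActivation()
print(act(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```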
A Survey on Visual Transformer
Han, Kai, Wang, Yunhe, Chen, Hanting, Chen, Xinghao, Guo, Jianyuan, Liu, Zhenhua, Tang, Yehui, Xiao, An, Xu, Chunjing, Xu, Yixing, Yang, Zhaohui, Zhang, Yiman, Tao, Dacheng
Transformer is a type of deep neural network mainly based on the self-attention mechanism, which was originally applied in the natural language processing field. Inspired by the strong representation ability of the transformer, researchers have proposed extending transformers to computer vision tasks. Transformer-based models show competitive and even better performance on various visual benchmarks compared to other network types such as convolutional networks and recurrent networks. With high performance and without human-defined inductive biases, the transformer is receiving more and more attention from the vision community. In this paper, we provide a literature review of these visual transformer models by categorizing them according to different tasks and analyzing their advantages and disadvantages. The main categories include basic image classification, high-level vision, low-level vision, and video processing. Self-attention in computer vision is also briefly revisited, as self-attention is the base component of the transformer. Efficient transformer methods are included for pushing the transformer into real applications on devices. Finally, we discuss the challenges and further research directions for visual transformers.
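Since the review revisits self-attention as the base component, a compact reference implementation of single-head scaled dot-product self-attention is included below. This is the standard formulation shared by the surveyed models, not a specific method from the survey.

```python
# Standard scaled dot-product self-attention (generic reference, not tied to
# any particular method in the survey).
import math
import torch

def self_attention(x, W_q, W_k, W_v):
    """x: (seq, d_model); W_q/W_k/W_v: (d_model, d_head)."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    attn = torch.softmax(q @ k.T / math.sqrt(k.shape[-1]), dim=-1)  # (seq, seq)
    return attn @ v                                                 # (seq, d_head)

seq, d_model, d_head = 10, 32, 16
out = self_attention(torch.randn(seq, d_model),
                     *(torch.randn(d_model, d_head) for _ in range(3)))
print(out.shape)  # torch.Size([10, 16])
```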
Bringing Giant Neural Networks Down to Earth with Unlabeled Data
Tang, Yehui, You, Shan, Xu, Chang, Shi, Boxin, Xu, Chao
Compressing giant neural networks has gained much attention due to their extensive applications on edge devices such as cellphones. During the compression process, one of the most important procedures is to retrain the pre-trained models on the original training dataset. However, due to considerations of security, privacy, or commercial profit, in practice only a fraction of the training data is made available, which makes retraining infeasible. To solve this issue, this paper proposes to resort to unlabeled data, which can be cheaper to acquire. Specifically, we exploit the unlabeled data to mimic the classification characteristics of the giant networks, so that their original capacity can be preserved nicely. Nevertheless, there exists a dataset bias between the labeled and unlabeled data, which disturbs the mimicking to some extent. We therefore correct this bias with an adversarial loss that aligns the distributions of their low-level feature representations. We further provide theoretical discussions about how the unlabeled data help the compressed networks generalize better. Experimental results demonstrate that the unlabeled data can significantly improve the performance of the compressed networks.
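The two training signals described, output mimicking on unlabeled data and adversarial alignment of low-level features, can be sketched as below. The discriminator, temperature, and loss weight are placeholders, not the paper's exact setup.

```python
# Schematic of the two losses described above (placeholder discriminator,
# temperature, and loss weight; not the paper's exact configuration).
import torch
import torch.nn.functional as F

def mimic_loss(student_logits, teacher_logits, T: float = 4.0):
    """The compressed student mimics the giant teacher's soft predictions on unlabeled images."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T

def feature_alignment_loss(discriminator, feat_labeled, feat_unlabeled):
    """Adversarial term: the discriminator tries to tell labeled from unlabeled
    features; training the student to fool it aligns the two distributions."""
    pred_l = discriminator(feat_labeled)
    pred_u = discriminator(feat_unlabeled)
    return F.binary_cross_entropy_with_logits(pred_l, torch.ones_like(pred_l)) + \
           F.binary_cross_entropy_with_logits(pred_u, torch.zeros_like(pred_u))

# Toy usage with random tensors and a linear "discriminator".
disc = torch.nn.Linear(64, 1)
total = mimic_loss(torch.randn(8, 10), torch.randn(8, 10)) \
        + 0.1 * feature_alignment_loss(disc, torch.randn(8, 64), torch.randn(8, 64))
print(total.item())
```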