RecycleGPT: An Autoregressive Language Model with Recyclable Module

Yufan Jiang, Qiaozhi He, Xiaomin Zhuang, Zhihua Wu, Kunpeng Wang, Wenlai Zhao, Guangwen Yang

arXiv.org Artificial Intelligence 

Existing large language models must run K times to generate a sequence of K tokens. We present RecycleGPT, a generative language model that accelerates decoding by recycling pre-generated model states through a lightweight recyclable module, rather than running the whole model at every step. Our approach relies on the observation that adjacent tokens in a sequence usually have strong correlations, so the next token can often be reasonably guessed or inferred from the preceding ones. Experiments and analysis demonstrate the effectiveness of the approach in lowering inference latency, achieving up to a 1.4x speedup while preserving high performance.

Large language models (LLMs) (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023; Chowdhery et al., 2022; Biderman et al., 2023; Smith et al., 2022) have revolutionized natural language generation with their ability to produce satisfactory text across a wide range of application domains. This strong performance benefits greatly from scaling model size (100B+ parameters), but a single decoding step also becomes slower as the model grows. Beyond the immense computation introduced by larger models, a larger memory footprint is another major cause of slow LLM inference (Dao et al., 2022; Pope et al., 2023). This footprint comprises the trained model parameters, the temporary state used during inference, and the KV cache, all of which must be held in memory.
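As a rough illustration of the decoding pattern described above, the sketch below alternates a full forward pass with a cheap prediction made directly from a cached hidden state. The module names (`TinyLM`, `RecycleModule`) and the fixed every-other-step schedule are illustrative assumptions for this sketch, not the paper's released implementation.

```python
# Minimal sketch of recycle-style decoding (hypothetical interfaces, not the
# authors' code). The full model returns (logits, last hidden state); the
# recyclable module predicts the following token from that cached state,
# skipping a full forward pass on alternating steps.
import torch
import torch.nn as nn

VOCAB, HIDDEN = 1000, 64

class TinyLM(nn.Module):
    """Stand-in for the full decoder: embeds the prefix and returns
    (next-token logits, hidden state of the newest position)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.body = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, ids):
        h, _ = self.body(self.embed(ids))
        last = h[:, -1]                      # state of the newest token
        return self.head(last), last

class RecycleModule(nn.Module):
    """Lightweight predictor that recycles the cached hidden state to guess
    the next token without rerunning the full model."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, VOCAB)

    def forward(self, cached_hidden):
        return self.proj(cached_hidden)

@torch.no_grad()
def generate(full_model, recycle_module, prompt_ids, new_tokens):
    ids = prompt_ids
    cached_hidden = None
    for step in range(new_tokens):
        if step % 2 == 0:
            # Full decoding step: run the whole model once.
            logits, cached_hidden = full_model(ids)
        else:
            # Recycled step: a cheap projection replaces a full forward pass.
            logits = recycle_module(cached_hidden)
        next_id = logits.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=1)
    return ids

if __name__ == "__main__":
    prompt = torch.randint(0, VOCAB, (1, 8))
    out = generate(TinyLM(), RecycleModule(), prompt, new_tokens=6)
    print(out.shape)  # half of the new tokens skipped a full forward pass
```

In this toy schedule, roughly every second token avoids a full forward pass and the associated memory traffic, which is where a latency gain of the reported magnitude (up to about 1.4x) would come from; the real model's recycling schedule and module design are described in the paper itself.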
