Enhanced Computationally Efficient Long LoRA Inspired Perceiver Architectures for Auto-Regressive Language Modeling

Kaleel Mahmood, Shaoyi Huang

arXiv.org Artificial Intelligence 

The Transformer architecture has revolutionized the Natural Language Processing field and is the backbone of Large Language Models (LLMs). The Transformer uses the attention mechanism, which computes the pair-wise similarity between its input tokens to produce latent vectors that capture the semantic meaning of the input text. One of the challenges of the Transformer architecture is the quadratic complexity of the attention mechanism, which prohibits efficient processing of long sequence lengths. An important line of work in this regard is the Perceiver class of architectures, which has demonstrated excellent performance while reducing computational complexity. In this paper, we use PerceiverAR, which was proposed for auto-regressive modeling, as a baseline and provide three different architectural enhancements to it with varying computation-overhead tradeoffs. Inspired by the recently proposed efficient attention computation approach of Long-LoRA, we then present an equally efficient Perceiver-based architecture (termed the Long LoRA Perceiver, LLP) that can be used as the base architecture in LLMs rather than merely as a fine-tuning add-on. Our results on different benchmarks indicate notable improvements compared to recent Transformer-based models.

The Transformer architecture has revolutionized the field of artificial intelligence, especially Natural Language Processing (NLP) Vaswani (2017). The recent success of Large Language Models such as ChatGPT Achiam et al. (2023), Gemini Team et al. (2023), and Llama Touvron et al. (2023); Dubey et al. (2024), with their comprehension and reasoning capabilities, is a testament to the effectiveness of the Transformer architecture. Prior to Transformers, deep Convolutional Neural Networks (CNNs) had demonstrated remarkably good results in computer vision applications; however, they have not shown the same effectiveness when applied to NLP.
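To make the complexity gap discussed above concrete, the following minimal sketch (not taken from the paper; the tensor shapes, function names, and the single-head, projection-free setup are illustrative assumptions) contrasts full causal self-attention, whose pair-wise score matrix grows quadratically with the sequence length n, with a Perceiver-style cross-attention step in which a smaller set of m latent queries attends to the n inputs, reducing the cost to roughly O(n·m).

```python
# Illustrative sketch only: quadratic self-attention vs. Perceiver-style latent cross-attention.
import torch
import torch.nn.functional as F

def full_self_attention(x):
    """Causal scaled dot-product self-attention over x of shape (n, d): cost O(n^2 * d)."""
    n, d = x.shape
    q, k, v = x, x, x                       # single head, no learned projections, for brevity
    scores = q @ k.T / d ** 0.5             # (n, n) pair-wise similarities -> quadratic memory
    causal = torch.tril(torch.ones(n, n, dtype=torch.bool))
    scores = scores.masked_fill(~causal, float("-inf"))
    return F.softmax(scores, dim=-1) @ v    # (n, d)

def perceiver_cross_attention(x, latents):
    """m latent queries of shape (m, d) attend to n inputs of shape (n, d): cost O(n * m * d)."""
    d = x.shape[-1]
    scores = latents @ x.T / d ** 0.5       # (m, n) -- linear in the sequence length n
    return F.softmax(scores, dim=-1) @ x    # (m, d) compressed latent representation

n, m, d = 1024, 64, 32                      # hypothetical sizes, m << n
x = torch.randn(n, d)
latents = torch.randn(m, d)
print(full_self_attention(x).shape)                 # torch.Size([1024, 32])
print(perceiver_cross_attention(x, latents).shape)  # torch.Size([64, 32])
```

PerceiverAR and the enhancements studied in this paper build on this latent cross-attention idea while preserving the causal ordering required for auto-regressive modeling.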