Zheng, Linghan
RecurFormer: Not All Transformer Heads Need Self-Attention
Yan, Ruiqing, Zheng, Linghan, Du, Xingbo, Zou, Han, Guo, Yufeng, Yang, Jianfei
Transformer-based large language models (LLMs) excel at modeling complex language patterns but face significant computational costs during inference, especially with long inputs, due to the attention mechanism's memory overhead. We observe that certain attention heads exhibit a distribution in which the attention weights concentrate on tokens near the query token, which we term recency aware; these heads focus on local, short-range dependencies. Leveraging this insight, we propose RecurFormer, a novel architecture that replaces these attention heads with linear recurrent neural networks (RNNs), specifically the Mamba architecture. This replacement reduces the cache size without evicting tokens, thus maintaining generation quality. RecurFormer retains the ability to model long-range dependencies through the remaining attention heads and allows pretrained Transformer-based LLM weights to be reused with continual training. Experiments demonstrate that RecurFormer matches the original model's performance while significantly improving inference efficiency. Our approach offers a practical solution to the computational challenges of Transformer-based LLM inference, making it highly attractive for tasks involving long inputs.
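A minimal sketch of the head-replacement idea described above, not the authors' implementation: heads whose attention mass concentrates near the query are detected with a simple recency score and swapped for a fixed-state linear recurrent block, while the remaining heads keep full self-attention. The score definition, threshold, and the simplified stand-in for Mamba below are illustrative assumptions.

```python
# Illustrative sketch of RecurFormer's head-replacement idea (assumptions:
# the recency score, threshold, and the simple linear-recurrent stand-in
# for Mamba are ours, not the paper's exact implementation).
import torch
import torch.nn as nn

def recency_score(attn_weights: torch.Tensor, window: int = 8) -> float:
    """Fraction of one head's attention mass within `window` tokens of the query.

    attn_weights: (seq_len, seq_len) row-normalized attention matrix of one head.
    """
    seq_len = attn_weights.size(0)
    idx = torch.arange(seq_len, device=attn_weights.device)
    near = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs() <= window  # |i - j| <= window
    return (attn_weights * near).sum().item() / attn_weights.sum().item()

class LinearRecurrentHead(nn.Module):
    """Stand-in for a Mamba-style head: constant-size state instead of a growing KV cache."""
    def __init__(self, dim: int):
        super().__init__()
        self.decay = nn.Parameter(torch.full((dim,), 0.9))
        self.proj_in = nn.Linear(dim, dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        state = torch.zeros(x.size(0), x.size(2), device=x.device)
        outputs = []
        u = self.proj_in(x)
        for t in range(x.size(1)):
            state = self.decay * state + u[:, t]   # recurrent update, fixed-size state
            outputs.append(self.proj_out(state))
        return torch.stack(outputs, dim=1)

# Hypothetical replacement rule for a recency-aware head i:
# if recency_score(head_attn[i]) > 0.9: heads[i] = LinearRecurrentHead(head_dim)
```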
Unveiling and Controlling Anomalous Attention Distribution in Transformers
Yan, Ruiqing, Du, Xingbo, Deng, Haoyu, Zheng, Linghan, Sun, Qiuzhuang, Hu, Jifang, Shao, Yuhang, Jiang, Penghao, Jiang, Jinrong, Zhao, Lian
With the advent of large models based on the Transformer architecture, researchers have observed an anomalous phenomenon in the attention mechanism: very high attention on the first element, which is prevalent across Transformer-based models. Understanding it is crucial for techniques that rely on the attention distribution, such as Key-Value (KV) cache compression and infinite extrapolation, yet its latent cause remains unknown. In this paper, we analyze the phenomenon from the perspective of the waiver phenomenon, in which the internal values of certain elements in the sequence are reduced so that they can absorb excess attention without affecting their contribution to the information. Due to differences in positional encoding and attention patterns across specific models, we find that a model's selection of waiver elements falls into two categories: positional-encoding-based and feature-distribution-within-elements-based.
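A minimal sketch of how the anomalous concentration on the first element could be quantified: measure, per head, the share of attention mass assigned to position 0. The function name and threshold below are illustrative assumptions, not the paper's procedure.

```python
# Illustrative sketch: quantify how much attention each head places on the
# first element (a waiver/sink candidate). The 0.5 threshold and the naming
# are assumptions for illustration only.
import torch

def first_token_attention_share(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, seq_len, seq_len) row-normalized attention weights.

    Returns, per head, the average fraction of attention mass assigned to
    position 0 across all query positions.
    """
    return attn[:, :, 0].mean(dim=-1)  # (num_heads,)

# Example with random (softmax-normalized) attention maps:
attn = torch.softmax(torch.randn(12, 64, 64), dim=-1)
share = first_token_attention_share(attn)
anomalous_heads = (share > 0.5).nonzero(as_tuple=True)[0]
print(share, anomalous_heads)
```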
Top in Chinese Data Processing: English Code Models
Zheng, Linghan, Liu, Hui, Lin, Xiaojun, Dong, Jiayuan, Sheng, Yue, Shi, Gang, Liu, Zhiwei, Chen, Hongwei
Recent advancements in natural language processing (NLP) have led to increasingly sophisticated models capable of understanding and generating human language with significant proficiency. Scaling up language models has been shown to confer a range of benefits, such as improved performance and sample efficiency (Kaplan et al., 2020), and fine-tuning large models for diverse scenarios has become consensus practice in the community. Traditionally, language models and code-based models (Rozière et al., 2023; Feng et al., 2020) have been treated as distinct categories based on their domains of expertise, with the former excelling at general linguistic tasks and the latter in programming-related scenarios. However, an interesting observation arose in our experiments with Chinese text data generation tasks: intuitively, one would expect such tasks to be dominated by Chinese-domain language models, but code-based models trained on English datasets in fact exhibited superior performance. This unexpected finding challenges the traditional view that pre-trained models are domain-specific and calls for a deeper examination of their capabilities beyond their primary training language or format.
CAINNFlow: Convolutional block Attention modules and Invertible Neural Networks Flow for anomaly detection and localization tasks
Yan, Ruiqing, Zhang, Fan, Huang, Mengyuan, Liu, Wu, Hu, Dongyu, Li, Jinfeng, Liu, Qiang, Jiang, Jinrong, Guo, Qianjin, Zheng, Linghan
Detecting object anomalies is crucial in industrial processes, and unsupervised anomaly detection and localization is particularly important because large numbers of defective samples are difficult to obtain and the types of anomalies encountered in real life are unpredictable. Among existing unsupervised anomaly detection and localization methods, normalizing flow (NF)-based schemes have achieved strong results. However, the two subnets (complex functions) $s_{i}(u_{i})$ and $t_{i}(u_{i})$ in NF are usually multilayer perceptrons, which require flattening the input visual features from 2D to 1D, destroying the spatial location relationships in the feature map and losing spatial structure information. To retain and effectively extract spatial structure information, we design a complex-function model in which CBAM modules are alternately embedded in stacked $3\times3$ full convolutions, allowing spatial structure information to be retained and effectively extracted within the normalizing flow model. Extensive experiments on the MVTec AD dataset show that CAINNFlow achieves an advanced level of accuracy and inference efficiency with both CNN and Transformer backbone networks as feature extractors, reaching a pixel-level AUC of $98.64\%$ for anomaly detection on MVTec AD.
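A minimal sketch of the idea of replacing the MLP coupling subnets $s_i(u_i)$ and $t_i(u_i)$ with $3\times3$ convolutions interleaved with a CBAM-style block so the 2D feature layout is preserved. The module structure, hidden width, and simplified CBAM below are our own assumptions, not the CAINNFlow code.

```python
# Illustrative sketch: a convolutional coupling subnet with a simplified CBAM
# block, keeping the (B, C, H, W) spatial layout instead of flattening to 1D.
# Architecture details here are assumptions for illustration.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal channel + spatial attention (simplified CBAM)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):  # x: (B, C, H, W)
        # Channel attention from globally pooled descriptors.
        avg = x.mean(dim=(2, 3))
        mx = x.amax(dim=(2, 3))
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))[:, :, None, None]
        x = x * ca
        # Spatial attention from channel-pooled maps.
        sa = torch.sigmoid(self.spatial_conv(
            torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa

class ConvCouplingSubnet(nn.Module):
    """Convolutional replacement for the s_i / t_i subnets of a coupling layer."""
    def __init__(self, in_ch: int, out_ch: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            CBAM(hidden),
            nn.Conv2d(hidden, out_ch, 3, padding=1),
        )

    def forward(self, u):  # (B, C, H, W) in and out; spatial structure preserved
        return self.net(u)
```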