DeepCoT: Deep Continual Transformers for Real-Time Inference on Data Streams
Picón, Ginés Carreto, Zhou, Peng Yuan, Zhang, Qi, Iosifidis, Alexandros
–arXiv.org Artificial Intelligence
Abstract--Transformer-based models have dramatically increased their size and parameter count to tackle increasingly complex tasks. At the same time, there is a growing demand for low-latency inference on resource-constrained devices that achieves high performance. In particular, stream data inference is typically performed over a sliding temporal window, leading to highly redundant computations. The recent Continual Transformers have addressed this issue, but they can only be used effectively in shallow models, which limits their scope and generalization power. In this paper, we propose the Deep Continual Transformer (DeepCoT), a redundancy-free encoder-only model that can be applied over existing deep encoder architectures with minimal changes. In our experiments over audio, video, and text streams, we show that DeepCoTs retain performance comparable to their non-continual baselines while offering a linear computational cost for all Transformer layers, which reduces running time by up to two orders of magnitude compared to previous efficient models.

Transformer models [1] have shown impressive performance for a wide range of classification and regression tasks [2], [3]. However, their size has grown significantly as new complex tasks have been targeted, resulting in slower inference speeds. This problem is especially critical in applications that require low-latency models, making the use of deep Transformer models unfeasible. Some applications, such as robot perception, impose limitations on the hardware available to perform predictions, further increasing the latency. Cloud solutions are not always possible due to privacy concerns or practical constraints such as network delay or reliability. Moreover, there is an increasing awareness of the high energy consumption required to run large Transformer-based models. Stream processing is one problem with such characteristics.
Stream processing can be defined as the set of tasks in which new predictions are made by a model at specific intervals or on demand, given new data inputs. These models normally benefit from leveraging past information together with the present data and rely on a sliding temporal window formed by the n most recent data points.

Zhou, and Q. Zhang are with the Department of Electrical and Computer Engineering, Aarhus University, Denmark. A. Iosifidis is with the Data Science Research Centre, Tampere University, Finland.
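The redundancy that motivates this line of work can be illustrated with a small sketch. The names and the toy score function below are hypothetical, not from the paper: naive sliding-window inference re-processes all n points in the window at every step, whereas a continual-style update (shown here for a decomposable score, a windowed sum, rather than DeepCoT's actual attention mechanism) touches only the arriving and departing points.

```python
from collections import deque

def window_scores(stream, n, score_fn):
    """Naive sliding-window inference: re-score the full window of the
    n most recent points at every step, i.e. O(n) redundant work."""
    buf = deque(maxlen=n)
    out = []
    for x in stream:
        buf.append(x)
        out.append(score_fn(list(buf)))  # recomputes over the whole window
    return out

def continual_scores(stream, n):
    """Continual-style update for a windowed sum: each step retracts the
    point leaving the window and adds the new one, i.e. O(1) per step."""
    buf = deque(maxlen=n)
    s = 0
    out = []
    for x in stream:
        if len(buf) == n:
            s -= buf[0]   # retract the oldest point before it is evicted
        buf.append(x)
        s += x            # incorporate the newly arrived point
        out.append(s)
    return out
```

For a decomposable score such as `sum`, both routines produce identical outputs, but the continual version does constant work per new data point; Continual Transformers apply the same idea to attention computations.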
Nov-25-2025