Block Transformer: Global-to-Local Language Modeling for Fast Inference

Neural Information Processing Systems

We introduce the Block Transformer, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks associated with self-attention. Self-attention requires the key-value (KV) cache of all previous tokens to be retrieved from memory at every decoding step in order to incorporate context, leading to two primary bottlenecks during batch inference. First, there is a significant delay in obtaining the first token, as the information of the entire prompt must first be processed to prefill the KV cache.
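The KV-cache bottleneck described above is easy to quantify with a back-of-the-envelope calculation. The sketch below (all model dimensions are illustrative assumptions, not the Block Transformer's actual configuration) shows how the cache scales with batch size and sequence length, and how a global model that attends over one embedding per block of length 4 would shrink the dominant term:

```python
# Minimal sketch (not from the paper) of why the KV cache dominates
# batch decoding. All dimensions below are illustrative assumptions.

def kv_cache_bytes(batch, seq_len, n_layers, n_heads, head_dim, bytes_per_elem=2):
    # Keys and values are cached per layer, per head, per position.
    return 2 * batch * seq_len * n_layers * n_heads * head_dim * bytes_per_elem

# Vanilla decoder: every past position is cached and re-read each step.
vanilla = kv_cache_bytes(batch=32, seq_len=4096, n_layers=32, n_heads=32, head_dim=128)

# Global-to-local style: if the global model attends over one embedding
# per block (block length 4 here), its cache shrinks by ~4x, and the
# local model's cache is bounded by the block length.
block_len = 4
global_part = kv_cache_bytes(batch=32, seq_len=4096 // block_len,
                             n_layers=32, n_heads=32, head_dim=128)

print(f"vanilla KV cache:              {vanilla / 2**30:.1f} GiB")
print(f"global (block-level) KV cache: {global_part / 2**30:.1f} GiB")
```

With these toy numbers the per-batch cache drops from 64 GiB to 16 GiB before the (block-length-bounded) local cache is even counted, which is the intuition behind trading token-level global attention for block-level attention.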


228499b55310264a8ea0e27b6e7c6ab6-AuthorFeedback.pdf

Neural Information Processing Systems

We will revise the sentences to make this clear. 2. Comparison to polar and LDPC codes: We will add more figures for comparisons in the revised version. While TurboAE does not outperform traditional codes, achieving comparable reliability under discrete channels requires a major breakthrough. This is because learning an autoencoder that includes a non-differentiable layer (binarization of the code) makes training challenging. We see (relatively) fewer examples at the decision boundaries, making it hard to train an accurate decoder.
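The non-differentiable binarization the rebuttal refers to is commonly handled with a straight-through estimator (STE): binarize in the forward pass and let gradients pass through unchanged in the backward pass. The PyTorch sketch below illustrates the general technique; it is a generic STE, not necessarily TurboAE's exact training recipe:

```python
import torch

class STEBinarize(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; pass gradients through
    unchanged in the backward pass (straight-through estimator). A generic
    workaround for non-differentiable binarization, shown for illustration."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity gradient: treat sign() as the identity when backpropagating.
        return grad_output

x = torch.randn(4, requires_grad=True)
y = STEBinarize.apply(x)
y.sum().backward()
print(x.grad)  # all ones: gradients flowed through the hard binarization
```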





FLoC: Facility Location-Based Efficient Visual Token Compression for Long Video Understanding

Cho, Janghoon, Lee, Jungsoo, Hayat, Munawar, Hwang, Kyuwoong, Porikli, Fatih, Choi, Sungha

arXiv.org Artificial Intelligence

Recent studies in long video understanding have harnessed the advanced visual-language reasoning capabilities of Large Multimodal Models (LMMs), driving the evolution of video-LMMs specialized for processing extended video sequences. However, the scalability of these models is severely limited by the overwhelming volume of visual tokens generated from such sequences. To address this challenge, this paper proposes FLoC, an efficient visual token compression framework based on the facility location function: a principled approach that selects a compact yet highly representative and diverse subset of visual tokens within a predefined token budget. By integrating the lazy greedy algorithm, our method selects this subset quickly, drastically reducing the number of visual tokens while retaining the near-optimality guarantee of greedy submodular maximization. Notably, our approach is training-free, model-agnostic, and query-agnostic, providing a versatile solution that integrates seamlessly with diverse video-LMMs and existing workflows. Extensive evaluations on large-scale benchmarks such as Video-MME, MLVU, and LongVideoBench demonstrate that our framework consistently surpasses recent compression techniques, highlighting both its effectiveness and robustness on the critical challenges of long video understanding and its efficiency in processing speed.
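For concreteness, the facility location function over a ground set V with a selected subset S is f(S) = Σ_{i∈V} max_{j∈S} sim(i, j), and the lazy greedy algorithm exploits submodularity to skip most marginal-gain recomputations. The sketch below is a generic textbook implementation under assumed nonnegative cosine similarities, not FLoC's released code:

```python
import heapq
import numpy as np

def lazy_greedy_facility_location(sim, budget):
    """Select `budget` indices maximizing f(S) = sum_i max_{j in S} sim[i, j]
    with the lazy greedy algorithm. `sim` is an (n, n) nonnegative pairwise
    similarity matrix. Generic textbook sketch, not FLoC's implementation."""
    n = sim.shape[0]
    coverage = np.zeros(n)  # current best similarity per element
    # Max-heap of (negated) marginal-gain upper bounds; exact for the empty set.
    heap = [(-sim[:, j].sum(), j) for j in range(n)]
    heapq.heapify(heap)
    selected = []
    while heap and len(selected) < budget:
        neg_gain, j = heapq.heappop(heap)
        # Recompute the true marginal gain; by submodularity the stale heap
        # value is an upper bound, so if j still beats the next-best bound
        # it is the exact greedy choice.
        gain = np.maximum(coverage, sim[:, j]).sum() - coverage.sum()
        if not heap or gain >= -heap[0][0]:
            selected.append(j)
            coverage = np.maximum(coverage, sim[:, j])
        else:
            heapq.heappush(heap, (-gain, j))
    return selected

tokens = np.random.randn(200, 64)  # e.g. per-frame visual token features
tokens /= np.linalg.norm(tokens, axis=1, keepdims=True)
sim = np.clip(tokens @ tokens.T, 0, None)  # nonnegative cosine similarities
print(lazy_greedy_facility_location(sim, budget=20))
```

Because greedy maximization of a monotone submodular function carries a (1 - 1/e) approximation guarantee, the selected token subset provably covers the full set nearly as well as the best subset of the same budget, which is the "near-optimal" claim in the abstract.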




CreditDecoding: Accelerating Parallel Decoding in Diffusion Large Language Models with Trace Credits

Wang, Kangyu, Jiang, Zhiyun, Feng, Haibo, Zhao, Weijia, Liu, Lin, Li, Jianguo, Lan, Zhenzhong, Lin, Weiyao

arXiv.org Artificial Intelligence

Diffusion large language models (dLLMs) generate text through iterative denoising steps, achieving parallel decoding by denoising only high-confidence positions at each step. However, existing approaches often repetitively remask tokens due to initially low confidence scores, leading to redundant iterations and limiting overall acceleration. Through analysis of dLLM decoding traces, we observe that the model often settles on a token's final prediction several steps before the step at which that token is actually decoded. To leverage this historical information and avoid redundant steps, we introduce the concept of Trace Credit, which quantifies each token's convergence potential by accumulating historical logits. Furthermore, we propose CreditDecoding, a training-free parallel decoding algorithm that accelerates the confidence convergence of correct but underconfident tokens by fusing current logits with Trace Credit. This significantly reduces redundant iterations and improves decoding robustness. Across eight benchmarks, CreditDecoding achieves a 5.48x speedup and a 0.48 performance improvement over LLaDA-8B-Instruct, and a 4.11x speedup with a 0.15 performance improvement over LLaDA-MoE-Instruct. Importantly, CreditDecoding scales effectively to long sequences and is orthogonal to mainstream inference optimizations, making it a readily integrable and versatile solution.
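A toy rendering of the trace-credit idea: accumulate each position's logits across denoising steps and fuse them with the current step's logits before the confidence threshold is applied. The fusion weight alpha and the simple running average below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def credit_fused_confidence(logits_history, alpha=0.5):
    """Toy sketch of trace-credit fusion: sum each position's historical
    logits (the trace credit) and blend them with the current logits before
    computing per-position confidence. `alpha` and the averaging scheme are
    illustrative assumptions, not the paper's exact formulation."""
    credit = np.zeros_like(logits_history[0])
    for logits in logits_history[:-1]:
        credit += logits  # accumulate trace credit over past denoising steps
    fused = logits_history[-1] + alpha * credit / max(len(logits_history) - 1, 1)
    probs = np.exp(fused - fused.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    return probs.max(-1), probs.argmax(-1)  # per-position confidence & token

# Three denoising steps over 4 masked positions with a vocabulary of 5:
history = [np.random.randn(4, 5) for _ in range(3)]
conf, toks = credit_fused_confidence(history)
print(conf, toks)  # positions with conf above a threshold would be unmasked
```

The intended effect is that a token the model has consistently favored across steps crosses the unmasking threshold earlier than its current-step confidence alone would allow, removing the redundant remasking iterations the abstract describes.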