HybridToken-VLM: Hybrid Token Compression for Vision-Language Models
Jusheng Zhang, Xiaoyang Guo, Kaitong Cai, Qinhan Lv, Yijia Fan, Wenhao Chai, Jian Wang, Keze Wang
Vision-language models (VLMs) have transformed multimodal reasoning, but feeding hundreds of visual patch tokens into LLMs incurs quadratic computational costs, straining memory and context windows. Traditional approaches face a trade-off: continuous compression dilutes high-level semantics such as object identities, while discrete quantization loses fine-grained details such as textures. We introduce HTC-VLM, a hybrid framework that disentangles semantics and appearance through two channels: a continuous pathway that preserves fine-grained detail via ViT patch tokens, and a discrete pathway that provides symbolic anchors via MGVQ quantization projected to four tokens. The two streams are fused into a 580-token hybrid sequence and compressed into a single voco token through a disentanglement attention mask and bottleneck, yielding efficient and grounded representations. HTC-VLM achieves an average performance retention of 87.2% across seven benchmarks (GQA, VQAv2, MMBench, MME, POPE, SEED-Bench, ScienceQA-Image), outperforming the leading continuous baseline (81.0%) at a 580-to-1 compression ratio. Attention analyses show that the compressed token prioritizes the discrete anchor, validating its semantic guidance. Our work demonstrates that a minimalist hybrid design can resolve the efficiency-fidelity dilemma and advance scalable VLMs.
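The architecture described above lends itself to a compact sketch. The following PyTorch snippet illustrates the hybrid-compression idea only, not the authors' implementation: the codebook size, hidden dimension, and the use of a single learned query cross-attending over the 580-token hybrid sequence as the bottleneck are all assumptions, and the paper's disentanglement attention mask is reduced here to simply inspecting the attention mass placed on the four anchor positions.

```python
import torch
import torch.nn as nn

class HybridBottleneck(nn.Module):
    """Sketch: fuse 4 discrete anchor tokens with 576 continuous ViT patch
    tokens (580 total) and compress them into a single "voco" token via a
    learned query that cross-attends over the hybrid sequence.
    Codebook size (8192) and dim (1024) are illustrative assumptions."""

    def __init__(self, dim: int = 1024, codebook_size: int = 8192):
        super().__init__()
        # Hypothetical projection of MGVQ code indices to anchor embeddings.
        self.anchor_embed = nn.Embedding(codebook_size, dim)
        # Single learned query that becomes the compressed voco token.
        self.voco_query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor, anchor_ids: torch.Tensor):
        # patch_tokens: (B, 576, dim) continuous ViT features
        # anchor_ids:   (B, 4) discrete MGVQ code indices (semantic anchors)
        anchors = self.anchor_embed(anchor_ids)             # (B, 4, dim)
        hybrid = torch.cat([anchors, patch_tokens], dim=1)  # (B, 580, dim)
        query = self.voco_query.expand(hybrid.size(0), -1, -1)
        voco, weights = self.attn(query, hybrid, hybrid)    # (B, 1, dim), (B, 1, 580)
        # Share of attention the voco token puts on the 4 discrete anchors,
        # a crude proxy for the paper's attention analysis.
        anchor_mass = weights[:, 0, :4].sum(dim=-1)
        return voco, anchor_mass

# Usage with dummy inputs:
model = HybridBottleneck()
patches = torch.randn(2, 576, 1024)
codes = torch.randint(0, 8192, (2, 4))
voco, anchor_mass = model(patches, codes)
print(voco.shape, anchor_mass)  # torch.Size([2, 1, 1024])
```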
arXiv.org Artificial Intelligence
Dec-10-2025
- Country:
  - Europe > Austria
    - Vienna (0.14)
  - North America > United States (0.04)
- Genre:
- Research Report (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (0.93)
    - Natural Language > Large Language Model (0.89)
    - Representation & Reasoning (1.00)
    - Vision (1.00)