High Efficiency Image Compression for Large Visual-Language Models

Li, Binzhe, Wang, Shurun, Wang, Shiqi, Ye, Yan

arXiv.org Artificial Intelligence 

In recent years, large visual-language models (LVLMs) have shown impressive performance and promising generalization capability in multi-modal tasks, thereby replacing humans as receivers of visual information in various application scenarios. In this paper, we propose a variable-bitrate image compression framework consisting of a pre-editing module and an end-to-end codec to achieve promising rate-accuracy performance for different LVLMs. In particular, instead of optimizing an adaptive pre-editing network toward a particular task or several representative tasks, we propose a new optimization strategy tailored for LVLMs, designed around their representation and discrimination capability via token-level distortion and rank losses. The pre-editing module and the variable-bitrate end-to-end image codec are jointly trained with losses based on the semantic tokens of the large model, which introduces enhanced generalization capability across various data and tasks. Experimental results demonstrate that the proposed framework achieves much better rate-accuracy performance than the state-of-the-art coding standard, Versatile Video Coding (VVC). Meanwhile, experiments on multi-modal tasks confirm the robustness and generalization capability of the proposed framework.

Large visual-language models (LVLMs) have shown impressive success in a variety of multi-modal application domains. Images, which typically carry a high data volume, are compressed for transmission before being fed to LVLMs at the cloud end. Instead of supporting only a single task, LVLMs typically support multiple tasks simultaneously, which brings unprecedented challenges to image coding for machines [1].
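The token-level distortion and rank objectives described above can be sketched roughly as follows. This is a minimal illustration only: the specific loss forms, the score definition, and the weights `lam` and `alpha` are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def token_distortion_loss(tokens_orig, tokens_rec):
    """Mean squared error between the semantic token embeddings of the
    original and reconstructed images (hypothetical distortion term)."""
    return float(np.mean((tokens_orig - tokens_rec) ** 2))

def token_rank_loss(scores_orig, scores_rec, margin=0.1):
    """Pairwise hinge loss encouraging the reconstructed tokens to preserve
    the relative ranking (discrimination capability) of the originals."""
    loss, n = 0.0, 0
    for i in range(len(scores_orig)):
        for j in range(len(scores_orig)):
            if scores_orig[i] > scores_orig[j]:
                # A pair ordered in the original should stay ordered,
                # with at least `margin` separation, after reconstruction.
                loss += max(0.0, margin - (scores_rec[i] - scores_rec[j]))
                n += 1
    return loss / max(n, 1)

def combined_loss(tokens_orig, tokens_rec, scores_orig, scores_rec,
                  rate_bits, lam=0.01, alpha=1.0):
    """Joint rate-accuracy objective: rate + token distortion + rank terms."""
    return (lam * rate_bits
            + token_distortion_loss(tokens_orig, tokens_rec)
            + alpha * token_rank_loss(scores_orig, scores_rec))
```

In a real training loop these terms would be differentiable (e.g. computed on the frozen LVLM's token features in an autodiff framework) and backpropagated jointly through the pre-editing network and the codec.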
In the past decades, as the default visual data communication solutions, image and video coding standards have been developed and refined to improve rate-distortion (RD) performance, such as H.264/AVC [2], H.265/HEVC [3], H.266/VVC [4], and AVS [5]. Inspired by the rapid development of deep neural networks, many learning-based image and video codecs have been proposed [6]-[10], achieving RD performance comparable to and even better than VVC [11], [12].
