NVRC: Neural Video Representation Compression

Neural Information Processing Systems

Recent advances in implicit neural representation (INR)-based video coding have demonstrated its potential to compete with both conventional and other learning-based approaches. With INR methods, a neural network is trained to overfit a video sequence, with its parameters compressed to obtain a compact representation of the video content. However, although promising results have been achieved, the best INR-based methods are still outperformed by the latest standard codecs, such as VVC VTM, partially due to the simple model compression techniques employed. In this paper, rather than focusing on representation architectures, which is a common focus in many existing works, we propose a novel INR-based video compression framework, Neural Video Representation Compression (NVRC), targeting compression of the representation. Based on its novel quantization and entropy coding approaches, NVRC is the first framework capable of optimizing an INR-based video representation in a fully end-to-end manner for the rate-distortion trade-off. To further minimize the additional bitrate overhead introduced by the entropy models, NVRC also compresses all the network, quantization and entropy model parameters hierarchically.
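The end-to-end rate-distortion optimization of network parameters described above can be sketched as follows. This is a minimal illustration only: it assumes a straight-through quantizer and a single-Gaussian bit-cost proxy standing in for NVRC's learned entropy models, and all names are hypothetical.

```python
import torch

def quantize_ste(w: torch.Tensor, step: float = 0.01) -> torch.Tensor:
    """Round weights to a uniform grid; straight-through gradient
    so the rounding stays differentiable during training."""
    q = torch.round(w / step) * step
    return w + (q - w).detach()  # forward pass uses q, backward is identity

def rate_proxy(w: torch.Tensor, step: float = 0.01) -> torch.Tensor:
    """Crude differentiable bit-cost estimate: negative log-likelihood
    of the quantized weights under a fitted Gaussian, in bits.
    A stand-in for NVRC's learned, hierarchically coded entropy models."""
    q = quantize_ste(w, step)
    mu, sigma = q.mean(), q.std() + 1e-6
    nll_nats = 0.5 * ((q - mu) / sigma) ** 2 + torch.log(sigma) \
        + 0.5 * torch.log(torch.tensor(2 * torch.pi))
    return nll_nats.sum() / torch.log(torch.tensor(2.0))

# Rate-distortion objective for an overfitted INR `net` on a sequence `video`:
#   loss = distortion(net(coords), video) \
#        + lam * sum(rate_proxy(p) for p in net.parameters())
```

The straight-through trick is what lets the rate and distortion terms be optimized jointly: the decoder-side weights are genuinely quantized in the forward pass, while gradients still flow to the unquantized parameters.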


iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder

Neural Information Processing Systems

It was estimated that the world produced 59 ZB ($5.9 \times 10^{13}$ GB) of data in 2020, resulting in the enormous costs of both data storage and transmission. Fortunately, recent advances in deep generative models have spearheaded a new class of so-called neural compression algorithms, which significantly outperform traditional codecs in terms of compression ratio. Unfortunately, the application of neural compression garners little commercial interest due to its limited bandwidth; therefore, developing highly efficient frameworks is of critical practical importance. In this paper, we discuss lossless compression using normalizing flows which have demonstrated a great capacity for achieving high compression ratios. As such, we introduce iFlow, a new method for achieving efficient lossless compression. We first propose Modular Scale Transform (MST) and a novel family of numerically invertible flow transformations based on MST. Then we introduce the Uniform Base Conversion System (UBCS), a fast uniform-distribution codec incorporated into iFlow, enabling efficient compression.
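The idea of a uniform-distribution codec can be illustrated with a toy mixed-radix coder: a symbol that is uniform over a base $b$ costs exactly $\log_2 b$ bits, so a sequence of such symbols packs losslessly into one integer of $\log_2 \prod_i b_i$ bits. This is only a sketch of the principle; iFlow's actual UBCS is a streaming, fixed-precision construction.

```python
def ubcs_encode(symbols, bases):
    """Pack symbols s_i (0 <= s_i < b_i), each uniform over its base,
    into a single integer via mixed-radix positional coding."""
    code = 0
    for s, b in zip(symbols, bases):
        assert 0 <= s < b
        code = code * b + s
    return code

def ubcs_decode(code, bases):
    """Invert ubcs_encode by peeling digits off in reverse order."""
    out = []
    for b in reversed(bases):
        code, s = divmod(code, b)
        out.append(s)
    return list(reversed(out))
```

For example, `ubcs_encode([3, 1, 4], [5, 2, 6])` yields 46, which fits in $\lceil\log_2 60\rceil = 6$ bits, matching the summed entropies of the three uniform symbols.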


Low-Bitrate Video Compression through Semantic-Conditioned Diffusion

Wang, Lingdong, Su, Guan-Ming, Kothandaraman, Divya, Huang, Tsung-Wei, Hajiesmaili, Mohammad, Sitaraman, Ramesh K.

arXiv.org Artificial Intelligence

Traditional video codecs optimized for pixel fidelity collapse at ultra-low bitrates and produce severe artifacts. This failure arises from a fundamental misalignment between pixel accuracy and human perception. We propose a semantic video compression framework named DiSCo that transmits only the most meaningful information while relying on generative priors for detail synthesis. The source video is decomposed into three compact modalities: a textual description, a spatiotemporally degraded video, and optional sketches or poses that respectively capture semantic, appearance, and motion cues. A conditional video diffusion model then reconstructs high-quality, temporally coherent videos from these compact representations. Temporal forward filling, token interleaving, and modality-specific codecs are proposed to improve multimodal generation and modality compactness. Experiments show that our method outperforms baseline semantic and traditional codecs by 2-10X on perceptual metrics at low bitrates.


Adapting Neural Audio Codecs to EEG

Kastrati, Ard, Lanzendörfer, Luca, Rigoni, Riccardo, Matilla, John Staib, Wattenhofer, Roger

arXiv.org Artificial Intelligence

EEG and audio are inherently distinct modalities, differing in sampling rate, channel structure, and scale. Yet, we show that pretrained neural audio codecs can serve as effective starting points for EEG compression, provided that the data are preprocessed to be suitable to the codec's input constraints. Using DAC, a state-of-the-art neural audio codec as our base, we demonstrate that raw EEG can be mapped into the codec's stride-based framing, enabling direct reuse of the audio-pretrained encoder-decoder. Even without modification, this setup yields stable EEG reconstructions, and fine-tuning on EEG data further improves fidelity and generalization compared to training from scratch. We systematically explore compression-quality trade-offs by varying residual codebook depth, codebook (vocabulary) size, and input sampling rate. To capture spatial dependencies across electrodes, we propose DAC-MC, a multi-channel extension with attention-based cross-channel aggregation and channel-specific decoding, while retaining the audio-pretrained initialization. Evaluations on the TUH Abnormal and Epilepsy datasets show that the adapted codecs preserve clinically relevant information, as reflected in spectrogram-based reconstruction loss and downstream classification accuracy.
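The preprocessing step of mapping raw EEG into a codec's stride-based framing can be sketched per channel as below. This is an illustrative sketch, not the paper's pipeline: the stride of 512 and the peak normalization are assumptions, not necessarily DAC's actual hop size or input convention.

```python
import numpy as np

def preprocess_channel(eeg: np.ndarray, stride: int = 512) -> np.ndarray:
    """Map one raw EEG channel into an audio codec's input format:
    scale into the [-1, 1] range audio codecs expect, then right-pad
    so the length is a multiple of the codec's hop/stride."""
    x = eeg.astype(np.float32)
    peak = float(np.max(np.abs(x)))
    if peak > 0:
        x = x / peak
    pad = (-len(x)) % stride  # samples needed to reach the next frame boundary
    return np.pad(x, (0, pad))
```

With each electrode treated as a mono clip in this way, the audio-pretrained encoder-decoder can be reused directly, which is the starting point that the multi-channel DAC-MC extension then improves on.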


DUO-TOK: Dual-Track Semantic Music Tokenizer for Vocal-Accompaniment Generation

Lin, Rui, Wu, Zhiyue, Le, Jiahe, Wang, Kangdi, Chen, Weixiong, Dai, Junyu, Jiang, Tao

arXiv.org Artificial Intelligence

Duo-Tok is a source-aware dual-codebook tokenizer for vocal-accompaniment music that targets the growing tension between reconstruction quality and language-model (LM) learnability in modern lyrics-to-song systems. Existing codecs either prioritize high-fidelity reconstruction with difficult-to-model acoustic tokens or compress aggressively into semantic tokens that are LM-friendly but lossy, and they rarely make the tokenizer itself aware of dual-track structure. Duo-Tok follows a four-stage, SSL-centered pipeline: we first pretrain a BEST-RQ-style encoder on large-scale audio, then stabilize and factorize the representation with Gaussian replacement noise and multi-task supervision, before freezing the encoder to learn SimVQ-based dual codebooks with hard routing for vocals and accompaniment, and finally training latent diffusion decoders on top of the discrete tokens. Duo-Tok at 0.75 kbps shifts the empirical reconstruction-generation Pareto frontier, achieving the best music-tagging AP and the lowest vocabulary-normalized LM perplexity among compared codecs while maintaining reconstruction quality comparable to state-of-the-art music tokenizers.
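The bitrate of a discrete token stream like Duo-Tok's follows directly from the frame rate, the number of codebooks, and the vocabulary size, since an index from a vocabulary of size $V$ costs $\log_2 V$ bits. The worked numbers below are one combination consistent with the quoted 0.75 kbps, not Duo-Tok's published configuration.

```python
import math

def token_bitrate_kbps(frames_per_sec: float, codebooks: int,
                       vocab_size: int) -> float:
    """Bitrate of a token stream: each frame emits one index per
    codebook, and each index costs log2(vocab_size) bits."""
    return frames_per_sec * codebooks * math.log2(vocab_size) / 1000.0

# Assumed, illustrative values: 25 frames/s, two codebooks
# (vocals + accompaniment), 32768-entry vocabulary.
print(token_bitrate_kbps(25, 2, 32768))  # → 0.75
```

This arithmetic is also why vocabulary-normalized perplexity is the right LM-learnability metric here: raising the vocabulary trades per-token bit cost against sequence length.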