Decoupled Audio-Visual Dataset Distillation

Wenyuan Li, Guang Li, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama

arXiv.org Artificial Intelligence 

Audio-Visual Dataset Distillation aims to compress large-scale datasets into compact subsets while preserving the performance of the original data. However, conventional Distribution Matching (DM) methods struggle to capture intrinsic cross-modal alignment. Subsequent studies have attempted to introduce cross-modal matching, but two major challenges remain: (i) independently and randomly initialized encoders lead to inconsistent modality mapping spaces, increasing training difficulty; and (ii) direct interactions between modalities tend to damage modality-specific (private) information, thereby degrading the quality of the distilled data. To address these challenges, we propose DAVDD, a pretraining-based decoupled audio-visual distillation framework. DAVDD leverages a diverse pretrained bank to obtain stable modality features and uses a lightweight decoupler bank to disentangle them into common and private representations. To effectively preserve cross-modal structure, we further introduce Common Intermodal Matching together with a Sample-Distribution Joint Alignment strategy, ensuring that shared representations are aligned both at the sample level and the global distribution level. Meanwhile, private representations are entirely isolated from cross-modal interaction, safeguarding modality-specific cues throughout distillation. Extensive experiments across multiple benchmarks show that DAVDD achieves state-of-the-art results under all IPC settings, demonstrating the effectiveness of decoupled representation learning for high-quality audio-visual dataset distillation.
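The decouple-then-align idea can be sketched numerically. This is a minimal illustration, not the paper's implementation: the linear decoupler heads, the MSE-based sample-level and mean-based distribution-level losses, and all names and dimensions are assumptions, since the abstract does not specify the architecture or loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pretrained features for n paired audio/visual samples
# (dimensions are illustrative, not from the paper).
n, d, d_c, d_p = 8, 16, 6, 6
audio_feat = rng.normal(size=(n, d))
visual_feat = rng.normal(size=(n, d))

# Lightweight "decoupler": two linear heads per modality, splitting each
# feature into a common (shared) part and a private (modality-specific) part.
W_ac, W_ap = rng.normal(size=(d, d_c)), rng.normal(size=(d, d_p))
W_vc, W_vp = rng.normal(size=(d, d_c)), rng.normal(size=(d, d_p))

audio_common, audio_private = audio_feat @ W_ac, audio_feat @ W_ap
visual_common, visual_private = visual_feat @ W_vc, visual_feat @ W_vp

# Sample-level alignment: each audio/visual pair's common features agree.
sample_loss = np.mean((audio_common - visual_common) ** 2)

# Distribution-level alignment: global statistics of the common features
# (here just the mean, in the spirit of distribution matching) also agree.
dist_loss = np.mean((audio_common.mean(0) - visual_common.mean(0)) ** 2)

# Joint objective; note the private parts receive no cross-modal loss,
# so modality-specific cues are left untouched.
joint_loss = sample_loss + dist_loss
```

In a real training loop the decoupler heads and the distilled synthetic data would be optimized against such losses; here everything is frozen random projections purely to show how the terms are assembled.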
