Neural Information Processing Systems
Baseline Methods. As standard baselines, we first consider zero-shot CLIP (ZS) and vanilla fine-tuning (FT) with a contrastive loss. We construct the label map for the contrastive loss by regarding all samples from the same class as positives.

A.3 Multi-modal Classification Dataset. To evaluate multi-modal representation learning on video emotion classification, we use CMU-MOSEI, which consists of three modalities, textual (T), visual (V), and audio (A), and contains 23,453 YouTube video clips of diverse movie reviews; each clip is annotated with ordinal labels ranging from -3 (strongly negative) to 3 (strongly positive). MulT learns the joint encoder only with a standard classification loss (i.e., cross-entropy loss).

Metric. For image-text retrieval, we adopt top-1 and top-5 recall, following the CLIP retrieval setup. We use Flickr30k for zero-shot transfer image-text retrieval.
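The label map described above (all same-class samples treated as positives) can be sketched as a boolean mask over a batch. This is a minimal illustration, not the paper's exact implementation; the function name `class_positive_mask` is our own.

```python
import torch

def class_positive_mask(labels: torch.Tensor) -> torch.Tensor:
    """Build a boolean mask where mask[i, j] is True iff samples i and j
    share a class label, i.e. all same-class samples count as positives.
    Self-pairs on the diagonal are excluded."""
    labels = labels.view(-1, 1)
    mask = labels.eq(labels.t())   # True for same-class pairs
    mask.fill_diagonal_(False)     # a sample is not its own positive
    return mask
```

The resulting mask selects the positive pairs whose similarities the contrastive loss pulls together, while all remaining off-diagonal pairs act as negatives.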
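The top-k recall metric used for image-text retrieval can be sketched as follows, assuming (as in the standard CLIP evaluation) that the ground-truth match for query i sits at column i of the similarity matrix; `recall_at_k` is an illustrative helper, not code from the paper.

```python
import torch

def recall_at_k(sim: torch.Tensor, k: int) -> float:
    """Fraction of queries (rows of the similarity matrix) whose
    ground-truth item (assumed at the matching row index, sim[i, i])
    appears among the k highest-scoring columns."""
    topk = sim.topk(k, dim=1).indices                 # (N, k) retrieved ids
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # (N, 1) ground truth
    hits = (topk == targets).any(dim=1)               # hit if target in top-k
    return hits.float().mean().item()
```

Top-1 and top-5 recall are then `recall_at_k(sim, 1)` and `recall_at_k(sim, 5)` on the image-to-text similarity matrix (and its transpose for the text-to-image direction).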