MOVER: Multimodal Optimal Transport with Volume-based Embedding Regularization
arXiv.org — Artificial Intelligence
Recent advances in multimodal learning have largely relied on pairwise contrastive objectives to align different modalities, such as text, video, and audio, in a shared embedding space. While effective in bi-modal setups, these approaches struggle to generalize across more than two modalities and often lack semantic structure in high-dimensional spaces. In this paper, we propose MOVER, a novel framework that combines optimal transport-based soft alignment with volume-based geometric regularization to build semantically aligned and structured multimodal representations. By integrating a transport-guided matching mechanism with a geometric volume minimization objective (GAVE), MOVER encourages consistent alignment across all modalities in a modality-agnostic manner. Experiments on text-video-audio retrieval tasks demonstrate that MOVER significantly outperforms prior state-of-the-art methods in both zero-shot and fine-tuned settings. Additional analysis shows improved generalization to unseen modality combinations and stronger structural consistency in the learned embedding space.
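The abstract names two ingredients without giving formulas: a transport-guided soft matching between modality embeddings, and a volume minimization objective (GAVE). As a minimal sketch of what such components commonly look like, the snippet below pairs entropic optimal transport (Sinkhorn iterations) with a Gram-determinant volume measure; the precise objectives used by MOVER are not specified in the abstract, so the function names, the choice of sqrt(det(Gram)) as "volume", and all hyperparameters here are assumptions for illustration only.

```python
import numpy as np

def sinkhorn_plan(cost, epsilon=0.1, n_iters=100):
    """Entropic OT soft alignment between two embedding sets.

    cost: (n, m) pairwise cost matrix between items of two modalities.
    Returns a transport plan P with uniform marginals (assumed setup;
    MOVER's actual matching mechanism may differ).
    """
    n, m = cost.shape
    K = np.exp(-cost / epsilon)          # Gibbs kernel
    a = np.full(n, 1.0 / n)              # uniform source marginal
    b = np.full(m, 1.0 / m)              # uniform target marginal
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(n_iters):             # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # soft assignment plan

def volume_loss(embeddings):
    """Volume of the parallelotope spanned by per-modality embeddings.

    embeddings: list of (d,) vectors, one per modality, for one concept.
    sqrt(det(Gram)) is 0 when all modalities agree in direction and
    grows as they spread apart, so minimizing it tightens alignment
    (one plausible instantiation of a volume-based regularizer).
    """
    M = np.stack([e / np.linalg.norm(e) for e in embeddings])  # (k, d)
    G = M @ M.T                                                # Gram matrix
    return float(np.sqrt(max(np.linalg.det(G), 0.0)))
```

In a training loop, the transport plan would weight cross-modal similarity terms while the volume loss is added as a regularizer over each concept's modality embeddings; being defined over any number k of modalities at once is what makes this style of objective modality-agnostic.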
Aug-19-2025