Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities

Neural Information Processing Systems

Contrastive learning methods such as CLIP leverage naturally paired data (for example, images and their corresponding text captions) to learn general representations that transfer efficiently to downstream tasks. While such approaches are typically applied to two modalities, domains such as robotics, healthcare, and video must handle many types of data at once. We show that applying CLIP pairwise fails to capture joint information between modalities, thereby limiting the quality of the learned representations. To address this issue, we present Symile, a simple contrastive learning approach that captures higher-order information between any number of modalities. Symile provides a flexible, architecture-agnostic objective for learning modality-specific representations.
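To make the idea of a higher-order contrastive objective concrete, here is a minimal numpy sketch for three modalities. It scores a triple of representations with a trilinear inner product (a natural generalization of CLIP's pairwise dot product) and applies a softmax-style contrastive loss in which the matching triple is the positive and all other cross-batch combinations are negatives. This is an illustrative assumption about the objective's shape, not the paper's exact implementation; the function names and batching scheme are hypothetical.

```python
import numpy as np

def trilinear_scores(A, B, C):
    """Score every (i, j, k) triple across three batches of representations.

    A, B, C: arrays of shape (n, d), one row per example.
    Returns scores of shape (n, n, n) where
    score[i, j, k] = sum_d A[i, d] * B[j, d] * C[k, d],
    generalizing CLIP's pairwise dot product to three modalities.
    """
    return np.einsum('id,jd,kd->ijk', A, B, C)

def higher_order_contrastive_loss(A, B, C):
    """Hypothetical sketch of a Symile-style objective for three modalities.

    For each anchor i, the positive candidate is the matching triple
    (i, i, i); the negatives are all other (j, k) combinations in the
    batch. Returns the mean cross-entropy over anchors.
    """
    n = A.shape[0]
    scores = trilinear_scores(A, B, C)           # (n, n, n)
    logits = scores.reshape(n, -1)               # flatten (j, k) candidates
    pos_idx = np.arange(n) * n + np.arange(n)    # flat index of (i, i)
    # log-sum-exp normalizer per anchor, then cross-entropy on the positive
    log_z = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(log_z - logits[np.arange(n), pos_idx]))
```

With well-aligned representations (each matching triple sharing a dominant direction) the loss approaches zero, while mismatched batches incur a higher loss, mirroring the pairwise CLIP behavior but over joint triples.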