In-Context Symmetries: Self-Supervised Learning through Contextual World Models

Neural Information Processing Systems 

Can incorporating context into self-supervised vision algorithms eliminate augmentation-based inductive priors and enable dynamic adaptation to varying task symmetries? This work suggests a positive answer by enhancing the current joint embedding architecture with a finite context: an abstract representation of a task, containing a few demonstrations that convey task-specific symmetries, as shown in Figure 2(c).
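To make the idea concrete, here is a minimal, self-contained sketch of conditioning an embedding on a small context of demonstrations. All names (`encode`, `aggregate_context`, `contextual_embed`) and the additive conditioning scheme are illustrative assumptions, not the paper's actual architecture or API; the point is only that a few (input, transformed-input) pairs can be summarized into a context vector that modulates the representation.

```python
def encode(x):
    # Toy "encoder": a fixed linear map on a 2-d input (stand-in for a
    # learned backbone; weights are arbitrary illustrative values).
    w = [[0.5, -0.25], [0.1, 0.9]]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def aggregate_context(demos):
    # Summarize a few demonstration pairs (x, t(x)) by the mean
    # difference of their embeddings -- a crude proxy for the
    # task-specific symmetry the context is meant to convey.
    diffs = [[a - b for a, b in zip(encode(x), encode(tx))]
             for x, tx in demos]
    n = len(diffs)
    return [sum(d[i] for d in diffs) / n for i in range(len(diffs[0]))]

def contextual_embed(x, context):
    # Condition the embedding on the context; here a simple additive
    # shift, chosen only for clarity of the sketch.
    return [zi + ci for zi, ci in zip(encode(x), context)]

# Two demonstration pairs define the "task"; a query is then embedded
# in a context-dependent way.
demos = [([1.0, 0.0], [0.0, 1.0]), ([0.5, 0.5], [0.5, 0.5])]
ctx = aggregate_context(demos)
z = contextual_embed([1.0, 1.0], ctx)
```

With different demonstration pairs, the same query input lands at a different embedding, which is the behavior the finite context is meant to enable.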
