Pre-training with Random Orthogonal Projection Image Modeling

Maryam Haghighat, Peyman Moghadam, Shaheer Mohamed, Piotr Koniusz

arXiv.org Artificial Intelligence 

Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels. MIM applies random crops to input images, processes them with an encoder, and then recovers the masked inputs with a decoder, which encourages the network to capture and learn structural information about objects and scenes. The intermediate feature representations obtained from MIM are suitable for fine-tuning on downstream tasks. In this paper, we propose an Image Modeling framework based on random orthogonal projection instead of binary masking as in MIM. Our proposed Random Orthogonal Projection Image Modeling (ROPIM) reduces spatially-wise token information under a guaranteed bound on the noise variance, and can be considered as masking the entire spatial image area under locally varying masking degrees. Since ROPIM uses a random subspace for the projection that realizes the masking step, the readily available complement of the subspace can be used during unmasking to promote recovery of the removed information. We show that using random orthogonal projection leads to superior performance compared to crop-based masking, and demonstrate state-of-the-art results on several popular benchmarks.

[Figure: accuracy vs. training time. ROPIM achieves a higher accuracy (see also GPL-ROPIM) with a lower training time; the blue and yellow regions indicate fast methods and …, while ROPIM has both high accuracy and is fast (the green region).]

Existing MIM approaches (He et al., 2022; Xie et al., 2022) mainly apply masking in the spatial domain by randomly excluding image patches, which are highly correlated within their spatial neighbourhood. These approaches typically replace a random set of input tokens with a special learnable symbol, called MASK, and aim to recover either masked image pixels (He et al., 2022; Xie et al., 2022), masked content features (Wei et al., 2022) or latent representations (Baevski et al., 2022).
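As a concrete illustration of the binary-masking baseline described above, the sketch below replaces a random subset of patch tokens with a shared learnable MASK embedding. This is a minimal PyTorch sketch, not the authors' code: the helper name `random_binary_mask`, the tensor shapes, and the `mask_ratio` value are hypothetical.

```python
# Minimal sketch (not the authors' code): standard MIM-style binary masking,
# where a random subset of patch tokens is replaced by a learnable MASK embedding.
import torch
import torch.nn as nn

def random_binary_mask(tokens: torch.Tensor, mask_token: torch.Tensor,
                       mask_ratio: float = 0.6):
    """tokens: (B, N, D) patch embeddings; mask_token: (D,) learnable symbol."""
    B, N, D = tokens.shape
    num_masked = int(mask_ratio * N)
    # Independently sample which token positions to hide in each image.
    scores = torch.rand(B, N, device=tokens.device)
    masked_idx = scores.argsort(dim=1)[:, :num_masked]          # (B, num_masked)
    mask = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
    mask.scatter_(1, masked_idx, True)
    # Replace masked positions with the shared MASK embedding.
    out = torch.where(mask.unsqueeze(-1), mask_token.expand(B, N, D), tokens)
    return out, mask

# Example usage with hypothetical sizes (ViT-B/16 tokens for a 224x224 input):
tokens = torch.randn(2, 196, 768)
mask_token = nn.Parameter(torch.zeros(768))       # the learnable MASK symbol
masked_tokens, mask = random_binary_mask(tokens, mask_token)
```

The returned boolean `mask` is what a MIM decoder would use to restrict the reconstruction loss to the hidden positions.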
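For contrast, the following is a minimal sketch of the projection idea behind ROPIM, under the assumption that a rank-deficient orthogonal projector acts along the spatial (token) axis: every position loses some information (a locally varying masking degree), and the complementary projector I - P captures exactly the removed component that unmasking should recover. The function name, the rank choice, and the axis of projection are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (an assumption-laden illustration, not the authors' code):
# mask token features by projecting them onto a random subspace, and keep the
# complementary projector so the removed component can guide recovery.
import torch

def random_orthogonal_projector(dim: int, rank: int) -> torch.Tensor:
    """Projector P = U U^T onto a random rank-`rank` subspace of R^dim."""
    gaussian = torch.randn(dim, rank)
    # QR factorisation of a Gaussian matrix yields orthonormal columns U.
    u, _ = torch.linalg.qr(gaussian)                # (dim, rank), U^T U = I
    return u @ u.T                                  # (dim, dim), symmetric, P @ P = P

tokens = torch.randn(2, 196, 768)                   # hypothetical ViT patch tokens
P = random_orthogonal_projector(dim=196, rank=98)   # project along the token axis

# "Masking": every spatial position loses some information rather than a
# binary subset of patches being dropped entirely.
projected = torch.einsum("mn,bnd->bmd", P, tokens)

# Readily available complement of the subspace; (I - P) x is the discarded
# component that the network is encouraged to recover during "unmasking".
P_perp = torch.eye(196) - P
removed = torch.einsum("mn,bnd->bmd", P_perp, tokens)
assert torch.allclose(projected + removed, tokens, atol=1e-5)
```

Because P and I - P decompose every token exactly, the complement is available for free once the random subspace is drawn, which is the property the abstract highlights.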
