CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders

Neural Information Processing Systems

A vital and rapidly growing application, remote sensing offers vast yet sparsely labeled, spatially aligned multimodal data; this makes self-supervised learning algorithms invaluable. We present CROMA, a framework that combines contrastive and reconstruction self-supervised objectives to learn rich unimodal and multimodal representations. Our method separately encodes masked-out multispectral optical and synthetic aperture radar samples, aligned in space and time, and performs cross-modal contrastive learning. Another encoder fuses these sensors, producing joint multimodal encodings that are used to predict the masked patches via a lightweight decoder. We show that these objectives are complementary when leveraged on spatially aligned multimodal data. We also introduce X- and 2D-ALiBi, which spatially bias our cross- and self-attention matrices. These strategies improve representations and allow our models to effectively extrapolate to images up to $17.6\times$ larger at test time.
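The 2D-ALiBi idea mentioned above can be illustrated with a minimal sketch: extend ALiBi's linear attention penalty from 1D token offsets to 2D patch-grid distances, so attention logits are biased by how far apart two image patches are. This is an assumption-laden illustration, not CROMA's exact implementation; the distance metric (Euclidean here), the per-head slope schedule (the geometric sequence from the original ALiBi formulation, assuming the head count is a power of two), and the function names `alibi_slopes` and `bias_2d` are all choices made for this example.

```python
import numpy as np

def alibi_slopes(num_heads):
    # Geometric sequence of per-head slopes, as in the original ALiBi
    # formulation (assumes num_heads is a power of two for simplicity).
    start = 2 ** (-8.0 / num_heads)
    return [start ** (i + 1) for i in range(num_heads)]

def bias_2d(grid_size, num_heads):
    # Patch positions on a grid_size x grid_size grid, flattened row-major.
    coords = np.array(
        [(r, c) for r in range(grid_size) for c in range(grid_size)],
        dtype=float,
    )
    # Pairwise Euclidean distances between all patch positions.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    slopes = np.array(alibi_slopes(num_heads))
    # One additive penalty matrix per head: subtract slope * distance
    # from the attention logits before the softmax.
    return -slopes[:, None, None] * dist[None, :, :]

bias = bias_2d(grid_size=4, num_heads=8)
print(bias.shape)  # (8, 16, 16): one (patches x patches) bias per head
```

Because the bias depends only on relative patch distance, the same function can be evaluated on a larger grid at test time, which is one way to read the abstract's claim about extrapolating to larger images.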



Mapping Rio de Janeiro's favelas: general-purpose vs. satellite-specific neural networks

Hallopeau, Thomas, Guérin, Joris, Demagistri, Laurent, Fouzai, Youssef, Gracie, Renata, De Matos, Vanderlei Pascoal, Gurgel, Helen, Dessay, Nadine

arXiv.org Artificial Intelligence

While deep learning methods for detecting informal settlements have already been developed, they have not yet fully exploited the potential of recent pretrained neural networks. We compare two types of pretrained neural networks for detecting the favelas of Rio de Janeiro: 1. generic networks pretrained on large, diverse datasets of generic images; 2. a specialized network pretrained on satellite imagery. While the latter is more specific to the target task, the former have been pretrained on significantly more images. Hence, this research investigates whether task specificity or data volume yields superior performance in urban informal settlement detection.

