FLAVARS: A Multimodal Foundational Language and Vision Alignment Model for Remote Sensing
Corley, Isaac, Nsutezo, Simone Fobi, Ortiz, Anthony, Robinson, Caleb, Dodhia, Rahul, Ferres, Juan M. Lavista, Najafirad, Peyman
Remote sensing imagery is dense with objects and contextual visual information. There is a recent trend to combine paired satellite images and text captions for pretraining performant encoders for downstream tasks. However, while contrastive image-text methods like CLIP enable vision-language alignment and zero-shot classification, vision-only downstream performance tends to degrade compared to image-only pretraining methods such as MAE. In this paper, we propose FLAVARS, a pretraining method that combines the best of contrastive learning and masked modeling, along with geospatial alignment via contrastive location encoding. We find that FLAVARS significantly outperforms a SkyCLIP baseline on vision-only tasks such as KNN classification and semantic segmentation, gaining +6% mIoU on SpaceNet1, while retaining the ability to perform zero-shot classification, unlike MAE-pretrained methods.
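To make the combined objective concrete, below is a minimal, hypothetical PyTorch sketch of a FLAVARS-style loss: an image-text contrastive term, an image-location contrastive term for geospatial alignment, and a MAE-style masked-reconstruction term. The toy linear encoders, pixel-level masking, and equal loss weights are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (not the authors' code) of a FLAVARS-style pretraining
# objective. Encoders and loss weights are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

class ToyFlavars(nn.Module):
    def __init__(self, img_dim=3 * 64 * 64, txt_dim=512, emb=256):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, emb)  # stand-in for a ViT
        self.txt_enc = nn.Linear(txt_dim, emb)  # stand-in for a text encoder
        self.loc_enc = nn.Linear(2, emb)        # lat/lon location encoder
        self.decoder = nn.Linear(emb, img_dim)  # stand-in for a MAE decoder

    def forward(self, img, txt, latlon, mask_ratio=0.75):
        flat = img.flatten(1)
        # Randomly hide pixels as a crude stand-in for MAE patch masking
        # (mask == 1 marks visible pixels; ~75% are hidden).
        mask = (torch.rand_like(flat) > mask_ratio).float()
        z_img = self.img_enc(flat * mask)
        recon = self.decoder(z_img)
        # Reconstruction loss only on the hidden pixels, as in MAE.
        loss_mae = (((recon - flat) ** 2) * (1 - mask)).mean()
        loss_clip = info_nce(z_img, self.txt_enc(txt))     # vision-language
        loss_geo = info_nce(z_img, self.loc_enc(latlon))   # geospatial
        return loss_mae + loss_clip + loss_geo

model = ToyFlavars()
loss = model(torch.randn(8, 3, 64, 64), torch.randn(8, 512), torch.randn(8, 2))
loss.backward()
```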
Rapid building damage assessment workflow: An implementation for the 2023 Rolling Fork, Mississippi tornado event
Robinson, Caleb, Nsutezo, Simone Fobi, Ortiz, Anthony, Sederholm, Tina, Dodhia, Rahul, Birge, Cameron, Richards, Kasie, Pitcher, Kris, Duarte, Paulo, Ferres, Juan M. Lavista
Rapid and accurate building damage assessments from high-resolution satellite imagery following a natural disaster are essential to inform and optimize first responder efforts. However, performing such building damage assessments in an automated manner is non-trivial due to the challenges posed by variations in disaster-specific damage, diversity in satellite imagery, and the dearth of extensive, labeled datasets. To circumvent these issues, this paper introduces a human-in-the-loop workflow for rapidly training building damage assessment models after a natural disaster. This article details a case study using this workflow, executed in partnership with the American Red Cross during a tornado event in Rolling Fork, Mississippi in March 2023. The output from our human-in-the-loop modeling process achieved a precision of 0.86 and a recall of 0.80 for damaged buildings when compared to ground truth data collected post-disaster. This workflow was implemented end-to-end in under 2 hours per satellite imagery scene, highlighting its potential for real-time deployment.
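As a toy illustration of how the reported metrics are computed, the sketch below derives precision and recall for damaged buildings by comparing a hypothetical set of model-flagged buildings to a hypothetical post-disaster ground-truth set; the building IDs and values are invented for the example.

```python
# Hypothetical building IDs; in the case study these would come from the
# model's predictions and the Red Cross ground-truth survey, respectively.
predicted_damaged = {"b01", "b02", "b03", "b05"}  # buildings the model flags
actual_damaged = {"b01", "b02", "b04", "b05"}     # ground-truth damaged set

tp = len(predicted_damaged & actual_damaged)  # correctly flagged buildings
precision = tp / len(predicted_damaged)       # 3 / 4 = 0.75
recall = tp / len(actual_damaged)             # 3 / 4 = 0.75
print(f"precision={precision:.2f} recall={recall:.2f}")
```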
Mask Conditional Synthetic Satellite Imagery
Le, Van Anh, Reddy, Varshini, Chen, Zixi, Li, Mengyuan, Tang, Xinran, Ortiz, Anthony, Nsutezo, Simone Fobi, Robinson, Caleb
In this paper we propose a mask-conditional synthetic image generation model for creating synthetic satellite imagery datasets. Given a dataset of real high-resolution images and accompanying land cover masks, we show that it is possible to train an upstream conditional synthetic imagery generator, use that generator to create synthetic imagery paired with the land cover masks, and then train a downstream model on the synthetic imagery and masks that achieves test performance similar to a model trained with the real imagery. Further, we find that incorporating a mixture of real and synthetic imagery acts as a data augmentation method, producing better models than using only real imagery (0.5834 vs. 0.5235 mIoU). Finally, we find that encouraging diversity of outputs in the upstream model is a necessary component for improved downstream task performance. We have released code for reproducing our work on GitHub.
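A minimal sketch of the real-plus-synthetic mixing experiment, assuming hypothetical stand-in tensors for the real images, land cover masks, and generator outputs (the released GitHub code is the authoritative implementation):

```python
# Sketch (not the released code) of training a downstream segmentation model
# on the union of real (image, mask) pairs and generator-produced
# (synthetic image, mask) pairs. All data here are random placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for real high-resolution imagery with land cover masks.
real_images = torch.randn(100, 3, 64, 64)
real_masks = torch.randint(0, 5, (100, 64, 64))

# Stand-in for imagery from the upstream mask-conditional generator; in the
# paper this is produced conditioned on the same land cover masks.
synthetic_images = torch.randn(100, 3, 64, 64)

real_ds = TensorDataset(real_images, real_masks)
synth_ds = TensorDataset(synthetic_images, real_masks)

# Mixing both sources acts as data augmentation for the downstream model.
mixed_loader = DataLoader(ConcatDataset([real_ds, synth_ds]),
                          batch_size=16, shuffle=True)

for images, masks in mixed_loader:
    pass  # train a segmentation model (e.g., a U-Net) on each mixed batch
```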