AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models

François Lanusse, Liam Parker, Siavash Golkar, Miles Cranmer, Alberto Bietti, Michael Eickenberg, Geraud Krawezik, Michael McCabe, Ruben Ohana, Mariel Pettee, Bruno Régaldo-Saint Blancard, Tiberiu Tesileanu, Kyunghyun Cho, Shirley Ho

arXiv.org Artificial Intelligence 

We present AstroCLIP, a strategy to facilitate the construction of astronomical foundation models that bridge the gap between diverse observational modalities. We demonstrate that a cross-modal contrastive learning approach between images and optical spectra of galaxies yields highly informative embeddings of both modalities. In particular, we apply our method to multi-band images and optical spectra from the Dark Energy Spectroscopic Instrument (DESI), and show that: (1) these embeddings are well-aligned between modalities and can be used for accurate cross-modal searches, and (2) these embeddings encode valuable physical information about the galaxies -- in particular redshift and stellar mass -- that can be used to achieve competitive zero- and few-shot predictions without further fine-tuning. Additionally, in the process of developing our approach, we also construct a novel, transformer-based model and pre-training approach for processing galaxy spectra.
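The cross-modal contrastive objective described above can be sketched as a CLIP-style symmetric InfoNCE loss: paired image and spectrum embeddings of the same galaxy are pulled together while mismatched pairs in the batch are pushed apart. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the function name, temperature value, and batch shapes are assumptions for the example.

```python
import numpy as np

def clip_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, spec_emb: (batch, dim) arrays; row i of each array is
    assumed to come from the same galaxy (a positive pair).
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; diagonal holds matching pairs
    logits = img @ spec.T / temperature

    def cross_entropy(l):
        # log-softmax over each row, then pick out the diagonal targets
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # average the image->spectrum and spectrum->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Once trained with such an objective, cross-modal search reduces to nearest-neighbor lookup in the shared embedding space: embed a query image, then rank spectra by cosine similarity.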
