Poster
in
Workshop: AI for Science: from Theory to Practice

AstroCLIP: Cross-Modal Pre-Training for Astronomical Foundation Models

Francois Lanusse · Liam Parker · Siavash Golkar · Alberto Bietti · Miles Cranmer · Michael Eickenberg · Geraud Krawezik · John McCabe · Ruben Ohana · Mariel Pettee · Bruno Régaldo-Saint Blancard · Tiberiu Tesileanu · Kyunghyun Cho · Shirley Ho


Abstract:

We present AstroCLIP, a strategy to facilitate the construction of astronomical foundation models that bridge the gap between diverse astronomical observational modalities. We demonstrate that a cross-modal contrastive learning approach between images and spectra of galaxies yields highly informative embeddings of both modalities. In particular, we apply our method to multi-band images and spectrograms from the Dark Energy Spectroscopic Instrument (DESI), and show that: (1) these embeddings are well-aligned between modalities and can be used for accurate cross-modal searches, and (2) these embeddings encode valuable physical information about the galaxies - in particular redshift and stellar mass - that can be used to achieve competitive zero- and few-shot predictions without further fine-tuning. Additionally, in the process of developing our approach, we also construct a novel, transformer-based model and pretraining approach for galaxy spectra.
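The abstract does not spell out the training objective, but CLIP-style cross-modal contrastive learning is typically implemented as a symmetric InfoNCE loss over a batch of paired embeddings. The sketch below (function name, temperature value, and embedding sizes are illustrative assumptions, not the paper's actual implementation) shows the general idea: paired image/spectrum embeddings of the same galaxy are pulled together while mismatched pairs in the batch are pushed apart.

```python
import numpy as np

def clip_contrastive_loss(img_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, spec_emb: (N, D) arrays where row i of each array comes
    from the same galaxy. Hypothetical sketch, not the paper's code.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    spec = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    logits = img @ spec.T / temperature  # (N, N) similarity matrix
    idx = np.arange(len(logits))         # matching pairs lie on the diagonal

    def xent(l):
        # Row-wise softmax cross-entropy with the diagonal as the target.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the image-to-spectrum and spectrum-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Once such embeddings are aligned, the cross-modal search described in the abstract reduces to nearest-neighbor lookup in the shared space, e.g. `np.argmax(img @ spec.T, axis=1)` retrieves the best-matching spectrum for each image.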