Amortizing Gravitational Wave Inference with Transformers
Abstract
Deep learning is a powerful approach for scalable scientific inference. Once trained, neural networks can perform cheap data analysis, effectively amortizing the computational cost of training across many measurements. This addresses key efficiency and speed limitations of traditional inference methods. However, scientific data are often affected by artifacts such as missing or noise-contaminated segments, which many neural architectures cannot naturally accommodate. Here we present a transformer-based framework that enables amortized inference even on incomplete data in gravitational-wave astronomy. We demonstrate accurate simulation-based inference on 48 real gravitational-wave observations while robustly adapting to experimental conditions, including the removal of low-quality data and data from different detectors.