ComptonINR: Implicit Neural Representations for Fast Modeling of Compton Telescope Point Spread Functions
Abstract
Compton telescopes enable observation of the MeV universe; however, detecting gamma-ray sources requires generating the instrument response function, which is typically computed through intensive simulations on high-performance computers. We introduce ComptonINR, a small, coordinate-based neural network that learns the mapping between a point-source location in image space and the Compton camera’s measurements in data space. ComptonINR’s continuous structure allows it to be trained on a small simulated set of coarse-resolution point spread functions (PSFs) rather than the full response. Moreover, at inference time ComptonINR can interpolate PSFs at several times higher resolution, which significantly reduces the simulation requirements for the response. The model generalizes accurately to untrained source locations, achieving source detection with 0.987 precision, 0.964 recall, and a median angular error of ~0.63°. ComptonINR reduces the required simulation time by roughly a factor of 70 and trains in ~53 minutes on a MacBook. Furthermore, ComptonINR scales favorably with event count, charting a realistic path towards high-resolution gamma-ray imaging on consumer hardware.
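To illustrate the idea of a coordinate-based network queried at arbitrary source locations, the following is a minimal, hypothetical sketch. The layer sizes, sinusoidal positional encoding, and data-space binning are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(coords, n_freqs=4):
    """Map raw angles to sin/cos features so a small MLP can fit fine detail.

    Assumed encoding (NeRF-style), not necessarily what ComptonINR uses.
    """
    freqs = 2.0 ** np.arange(n_freqs) * np.pi          # (n_freqs,)
    angles = coords[:, None, :] * freqs[None, :, None]  # (B, n_freqs, 2)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    return feats.reshape(len(coords), -1)               # (B, 2 * 2 * n_freqs)

class TinyINR:
    """Toy implicit neural representation: source direction -> PSF over bins."""

    def __init__(self, in_dim, hidden=64, out_bins=128):
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_bins))
        self.b2 = np.zeros(out_bins)

    def __call__(self, coords):
        h = np.maximum(positional_encoding(coords) @ self.W1 + self.b1, 0.0)  # ReLU
        logits = h @ self.W2 + self.b2
        # Softmax so each predicted PSF is a normalized distribution over bins.
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

# Query the (untrained) network at continuous source locations -- the same
# mechanism that, after training, lets the model interpolate between the
# coarse grid points actually covered by simulation.
coords = np.array([[0.30, 1.20], [0.31, 1.20]])  # (theta, phi) in radians
psfs = TinyINR(in_dim=2 * 2 * 4)(coords)
print(psfs.shape)  # (2, 128): one PSF per queried source location
```

Because the input is a continuous coordinate rather than a grid index, a trained model of this form can be evaluated between simulated grid points, which is what permits training on a coarse PSF set and inferring at finer resolution.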