Scalable Inference for LArTPC Signal Processing with MobileU-Net and Overlap-Tile Chunking
Abstract
Liquid Argon Time Projection Chambers (LArTPCs) record ionization charge on wire planes over time, producing high-granularity wire-time images that undergo noise filtering and deconvolution to recover charge signals for downstream reconstruction. Deep-neural-network region-of-interest (DNN-ROI) finding reframes ROI selection as semantic segmentation and has been shown to gate expensive processing effectively. However, the reference U-Net-based DNN-ROI model is memory-hungry and slow on CPUs, which remain the dominant resource for production processing in many HEP workflows. Prior work made inference feasible by rebinning the input by a factor of ten, reducing compute at the expense of spatio-temporal resolution. We present a scalable DNN-ROI variant that retains full input resolution while substantially reducing memory and compute. First, we adopt a lightweight MobileNet encoder with a U-Net decoder (MobileU-Net) to cut parameters and floating-point operations. Second, we replace global downsampling with overlapping chunking (overlap-tile sliding-window inference with halo margins), enabling bounded-memory processing of large wire-time images without seam artifacts. On representative LArTPC data, the approach scales to larger inputs while largely maintaining ROI-finding performance relative to the original model. We detail the methods and quantitative evaluations below.
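To make the overlap-tile chunking concrete, the sketch below shows one way such bounded-memory inference can be organized: a fully convolutional segmentation model is applied to fixed-size tiles that are padded with a halo margin, and only the halo-free center of each prediction is written into the stitched output. This is an illustrative sketch under stated assumptions, not the paper's implementation; the function and parameter names (`tiled_inference`, `tile`, `halo`, `model`) are ours, and the model is assumed to preserve spatial resolution and emit a single-channel ROI score map.

```python
# Illustrative overlap-tile (sliding-window) inference with halo margins.
# Assumptions: `model` is a fully convolutional network (e.g. a MobileU-Net-style
# segmentation model) that maps a (1, C, H, W) input to a (1, 1, H, W) score map.
import torch
import torch.nn.functional as F


@torch.no_grad()
def tiled_inference(model, image, tile=512, halo=32):
    """Run `model` over `image` (C, H, W) in overlapping tiles.

    Each tile is extended by a `halo` margin on every side; only the central,
    halo-free region of each prediction is kept, so convolutional edge effects
    stay out of the stitched result and no seam artifacts appear.
    """
    c, h, w = image.shape
    # Reflect-pad the full image so border tiles also have a complete halo.
    padded = F.pad(image.unsqueeze(0), (halo, halo, halo, halo), mode="reflect")
    out = torch.zeros(1, h, w)  # single-channel ROI score map (assumption)

    for y in range(0, h, tile):
        for x in range(0, w, tile):
            th = min(tile, h - y)  # tile height (may be smaller at the edge)
            tw = min(tile, w - x)  # tile width
            # Tile plus halo, expressed in padded-image coordinates.
            chunk = padded[:, :, y : y + th + 2 * halo, x : x + tw + 2 * halo]
            pred = model(chunk)  # shape (1, 1, th + 2*halo, tw + 2*halo)
            # Keep only the central region; discard the halo borders.
            out[:, y : y + th, x : x + tw] = pred[
                0, :, halo : halo + th, halo : halo + tw
            ]
    return out
```

Because each forward pass sees at most a (tile + 2*halo)-sized chunk, peak memory is bounded by the tile size rather than by the full wire-time image, which is what allows full-resolution inputs to be processed without the factor-of-ten rebinning used in prior work.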