Investigating PDE Residual Attentions in Frequency Space for Diffusion Neural Operators
Medha Sawhney · Abhilash Neog · Mridul Khurana · Arka Daw · Anuj Karpatne
Abstract
Diffusion models for solving partial differential equations (PDEs) are rapidly gaining attention, with many approaches using PDE residuals as loss guidance during test-time optimization. While effective under sparse and noisy observations, these frameworks face key limitations, including slow inference, optimization instabilities, and reliance on knowing the noise structure in the observations as a binary mask at inference time. To overcome these limitations, we propose PRISMA (PDE Residual Informed Spectral Modulation with Attention), a conditional diffusion neural operator that injects PDE residuals into the architecture of diffusion models via gated attention mechanisms. In contrast to baselines, PRISMA requires no sensitive hyperparameter tuning of loss terms during training or inference, is mask-free, and is aware of both the spatial and spectral distributions of PDE residuals. Across four benchmark PDEs under high-noise (97\%) settings, we show that PRISMA matches or exceeds baseline accuracy while using 10$\times$ to 100$\times$ fewer denoising steps (20 vs. 200/2000) and achieving 6-7$\times$ faster inference than state-of-the-art diffusion models.