Bridging Neural Operator and Flow Matching for a Generative PDE Foundation Model
Zituo Chen · Sili Deng
Abstract
Pretraining on large-scale collections of PDE-governed spatiotemporal trajectories has recently shown promise for building generalizable models of dynamical systems. Yet most existing PDE foundation models rely on deterministic Transformer architectures, which demand substantial computational resources and lack generative flexibility. In contrast, generative models can capture uncertainty, making them well-suited for probabilistic forecasting, data assimilation, and scientific design. In this work, we introduce a generative PDE foundation model that bridges neural operator learning with flow matching. By jointly sampling the noise level and the physical timestep between adjacent states, the model learns a unified velocity field that transports a noisy current state toward its clean successor. Alongside the core framework, we propose architectural strategies that achieve up to 15× greater computational efficiency than full-length diffusion models, enabling large-scale pretraining at substantially reduced cost. Our framework combines autoregressive Transformers and latent diffusion in a two-stage training pipeline, yielding scalable, accurate, and extensible generative modeling of PDE systems. We curate a training corpus of $\sim$2M trajectories across 12 distinct PDE families and release a suite of pretrained autoencoders and generative latent models of varying parameter scales. For downstream evaluation, we benchmark on previously unseen Kolmogorov turbulence with few-shot adaptation, and demonstrate the long-term rollout stability of our model relative to its deterministic counterparts.
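To make the core training idea concrete, below is a minimal sketch of one way the joint sampling of the flow-matching noise level and the physical timestep could be implemented. The network `TinyVelocityNet`, the loss function, the linear interpolation path, and all hyperparameters here are illustrative assumptions, not the authors' released code: the model regresses a velocity that carries a noised current latent state toward its clean successor, conditioned on the sampled noise level and physical lag.

```python
# A minimal, assumed sketch of the joint (noise level, timestep) flow-matching
# objective described in the abstract. Names and design choices are illustrative.
import torch
import torch.nn as nn


class TinyVelocityNet(nn.Module):
    """Toy stand-in for the latent generative model: predicts a velocity field
    from the interpolated state plus (tau, dt) conditioning channels."""

    def __init__(self, channels: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 2, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x, tau, dt):
        # Broadcast the scalar conditioning values to spatial maps and concatenate.
        b, _, h, w = x.shape
        cond = torch.cat([
            tau.view(b, 1, 1, 1).expand(b, 1, h, w),
            dt.view(b, 1, 1, 1).expand(b, 1, h, w),
        ], dim=1)
        return self.net(torch.cat([x, cond], dim=1))


def flow_matching_loss(model, u_curr, u_next, dt, sigma: float = 1.0):
    """Jointly sample the noise level tau, noise the current state, and regress
    the straight-line velocity toward the clean successor state."""
    b = u_curr.shape[0]
    tau = torch.rand(b, device=u_curr.device)          # noise level in [0, 1]
    x0 = u_curr + sigma * torch.randn_like(u_curr)     # noisy current state
    x1 = u_next                                        # clean successor state
    t = tau.view(b, 1, 1, 1)
    x_tau = (1.0 - t) * x0 + t * x1                    # linear interpolant
    v_target = x1 - x0                                 # velocity of that path
    v_pred = model(x_tau, tau, dt)
    return ((v_pred - v_target) ** 2).mean()


if __name__ == "__main__":
    # Dummy latent state pairs (u_t, u_{t+k}) with per-example physical lags,
    # mimicking the joint sampling of noise level and timestep.
    model = TinyVelocityNet()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    u_curr = torch.randn(8, 4, 32, 32)
    u_next = torch.randn(8, 4, 32, 32)
    dt = torch.rand(8)
    loss = flow_matching_loss(model, u_curr, u_next, dt)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"flow-matching loss: {loss.item():.4f}")
```

At sampling time, such a velocity field could be integrated from a noised current state to produce a successor state, with the physical lag acting as an additional conditioning input; the exact sampler and conditioning scheme used in the paper are not specified in the abstract.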