Predictions and Corrections: Neural Predictors with Solver-based Correction for ODEs and PDEs
Abstract
Neural predictors offer fast surrogates for simulating dynamical systems but suffer from error accumulation over long horizons. We study a unified predict-correct paradigm in which a learned one-step neural forecast is post-processed by a physics-based solver step ("correction"). The same idea applies to ordinary differential equations (ODEs) and partial differential equations (PDEs). We examine how the predictor's training targets (losses), e.g., oracle RK4 or trapezoid/Crank-Nicolson (CN) targets, influence the quality of both the raw neural rollout and the corrected rollout. Experiments span classic ODEs (linear decay, Lorenz-63, Lorenz-96) and a 2D forced heat equation with homogeneous Dirichlet boundaries. Across settings, the correction step markedly improves stability and accuracy, often recovering near-reference solutions, while the choice of training target governs the size of the improvement and its robustness. We release scripts and artifacts to reproduce all results.
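To fix ideas, the following is a minimal sketch of the predict-correct rollout described above, in the ODE setting. The corrector shown is one trapezoid (Crank-Nicolson) sweep seeded by the neural forecast; this is one plausible instantiation for illustration, not necessarily the exact scheme used in the experiments, and the names `predictor` and `f` are placeholders.

```python
import numpy as np

def rollout_predict_correct(predictor, f, y0, dt, n_steps):
    """Roll out a trajectory with a neural predictor plus a solver-based
    correction. `predictor(y)` returns a one-step forecast of y(t+dt);
    `f(y)` is the ODE right-hand side dy/dt = f(y). The corrector is a
    single trapezoid (Crank-Nicolson) sweep that uses the neural guess
    in place of the unknown next state.
    """
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(n_steps):
        y = ys[-1]
        y_pred = predictor(y)                      # neural one-step forecast
        # Trapezoid correction: y_next = y + dt/2 * (f(y) + f(y_next)),
        # with y_next on the right-hand side replaced by the forecast.
        y_corr = y + 0.5 * dt * (f(y) + f(y_pred))
        ys.append(y_corr)
    return np.stack(ys)

# Example: linear decay dy/dt = -y, with a deliberately crude predictor
# (forward Euler) standing in for the trained network.
f = lambda y: -y
predictor = lambda y: y + 0.1 * f(y)
traj = rollout_predict_correct(predictor, f, y0=[1.0], dt=0.1, n_steps=50)
print(traj[-1])  # close to exp(-5) ~ 0.0067
```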