Scheduled Temporal Loss Weighting for Neural Operators
Oluwaseun Coker · Peter Jimack · Amirul Khan · He Wang
Abstract
Neural operators offer promise for efficient solutions to time-dependent partial differential equations, but face challenges in long-term prediction due to complex dynamics, gradient accumulation, and error propagation. To address these limitations, we propose a novel curriculum learning strategy, temporal weighted loss. This method mitigates overfitting to early dynamics by dynamically adjusting the weights applied to the loss across the temporal sequence, prioritising initial time steps during early training. This approach enhances model generalisation and prediction accuracy, demonstrating improved performance compared to baseline techniques.
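The abstract does not specify the exact weighting schedule, but the idea can be sketched as follows. This is an illustrative implementation under assumed choices: per-step weights decay geometrically with the rollout step `t` at a rate `gamma` that anneals linearly to zero over training, so early time steps dominate the loss at first and all steps are weighted uniformly by the end. The function names (`temporal_weights`, `weighted_rollout_loss`), the exponential form, and the linear annealing are all assumptions, not the authors' stated method.

```python
import numpy as np

def temporal_weights(num_steps, epoch, total_epochs, gamma0=2.0):
    """Per-time-step loss weights for a rollout of length num_steps.

    Early in training (epoch ~ 0) the decay rate is gamma0, so weights
    fall off geometrically with step index t, prioritising initial steps.
    The rate anneals linearly to 0, making the weights uniform by the
    final epoch. gamma0 and the linear schedule are illustrative choices.
    """
    gamma = gamma0 * max(0.0, 1.0 - epoch / total_epochs)  # annealed decay rate
    t = np.arange(num_steps)
    w = np.exp(-gamma * t)
    return w / w.sum()  # normalise so the weights sum to 1

def weighted_rollout_loss(per_step_errors, epoch, total_epochs):
    """Combine per-step prediction errors (e.g. MSE at each rollout step)
    into one scalar training loss using the scheduled weights."""
    errs = np.asarray(per_step_errors, dtype=float)
    w = temporal_weights(len(errs), epoch, total_epochs)
    return float(np.dot(w, errs))
```

With this schedule, a large error at a late time step contributes little to the loss early in training but its full share once the weights have flattened, which is one concrete way to realise the curriculum described above.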