Poster in Workshop: Machine Learning and the Physical Sciences

Using physics-informed regularization to improve extrapolation capabilities of neural networks

Ganesh Dasika · Laurent White


Abstract:

Neural-network-based surrogate models, which replace (parts of) a physics-based simulator, are attractive for their efficiency, yet they suffer from a lack of extrapolation capability. Focusing on the wave equation, we investigate several physics-based regularization terms in the loss function as a way to increase extrapolation accuracy, and we assess the impact of a term that conditions the neural network to weakly satisfy the boundary conditions. These regularization terms do not require any labeled data. By gradually incorporating the regularization terms during training, we achieve a more than 5X reduction in extrapolation error compared to a baseline (i.e., physics-less) neural network trained with the same set of labeled data. We map out future research directions and provide some insights on leveraging the trained neural-network state to devise sampling strategies.
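
The abstract does not include code, but the approach it describes follows the familiar physics-informed pattern: a data-fitting loss augmented with an unlabeled PDE-residual term and a weak boundary-condition penalty, with the physics terms phased in during training. Below is a minimal PyTorch sketch of how such a loss might be assembled. The MLP architecture, the 1-D wave equation with constant wave speed `c`, the Dirichlet boundary data, and the linear ramp-up schedule are all illustrative assumptions, not details taken from the paper.

```python
import torch

# Assumed MLP surrogate u_theta(x, t); the paper does not specify an architecture.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def wave_residual(model, x, t, c=1.0):
    """Residual of the 1-D wave equation, u_tt - c^2 * u_xx, via autograd."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t, t, torch.ones_like(u_t), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_tt - c**2 * u_xx

def loss_fn(model, data, collocation, boundary, step, ramp_steps=1000):
    x_d, t_d, u_d = data        # labeled samples from the simulator
    x_c, t_c = collocation      # unlabeled interior points (no labels needed)
    x_b, t_b, u_b = boundary    # points on the spatial boundary

    data_loss = torch.mean((model(torch.cat([x_d, t_d], dim=1)) - u_d) ** 2)
    pde_loss = torch.mean(wave_residual(model, x_c, t_c) ** 2)
    bc_loss = torch.mean((model(torch.cat([x_b, t_b], dim=1)) - u_b) ** 2)

    # Gradually incorporate the physics terms while training; this linear
    # ramp is an assumed schedule, as the abstract does not specify one.
    w = min(1.0, step / ramp_steps)
    return data_loss + w * (pde_loss + bc_loss)
```

Note that the PDE-residual and boundary penalties are evaluated at collocation points drawn freely from the domain, which is what makes the regularization label-free and, in principle, usable in the extrapolation region where no simulator data exist.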
