

Poster
in
Workshop: Machine Learning and the Physical Sciences

CAPE: Channel-Attention-Based PDE Parameter Embeddings for SciML

Makoto Takamoto · Francesco Alesiani · Mathias Niepert


Abstract:

Scientific Machine Learning (SciML) is concerned with the development of machine learning methods for emulating physical systems governed by partial differential equations (PDEs). ML-based surrogate models substitute inefficient and often non-differentiable numerical simulation algorithms and find multiple applications such as weather forecasting and molecular dynamics. While a number of ML-based methods for approximating the solutions of PDEs have been proposed in recent years, they typically do not take the parameters of the PDEs into account, making it difficult for the ML surrogate models to generalize to PDE parameters not seen during training. We propose a new channel-attention-based parameter embedding (CAPE) component for scientific machine learning models. The CAPE module can be combined with any neural PDE solver, allowing it to adapt to unseen PDE parameters without harming the original model’s performance. We evaluate CAPE on a PDE benchmark and obtain significant improvements over the base models. An implementation of the method and experiments are available at https://anonymous.4open.science/r/CAPE-ML4Sci-145
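The abstract does not spell out the CAPE architecture, but the general idea of a channel-attention-based parameter embedding can be sketched as follows: the PDE parameters (e.g. a viscosity or advection coefficient) are embedded into a vector, which is projected to per-channel gates that rescale the feature channels of the underlying neural PDE solver. The function and weight names below (`channel_attention_embedding`, `W_embed`, `W_gate`) are hypothetical and not taken from the paper; this is a minimal NumPy illustration of the mechanism, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention_embedding(features, pde_params, W_embed, W_gate):
    """Rescale feature channels using gates derived from PDE parameters.

    features:   (C, H, W) feature map from a layer of a neural PDE solver
    pde_params: (P,) vector of PDE parameters (e.g. viscosity)
    W_embed:    (P, D) hypothetical parameter-embedding matrix
    W_gate:     (D, C) hypothetical projection to per-channel gates
    """
    # Embed the PDE parameters into a D-dimensional vector.
    e = np.tanh(pde_params @ W_embed)
    # Sigmoid produces per-channel attention weights in (0, 1).
    gates = 1.0 / (1.0 + np.exp(-(e @ W_gate)))
    # Broadcast the gates over the spatial dimensions.
    return features * gates[:, None, None]

# Toy usage with random weights (stand-ins for learned parameters).
C, H, W, P, D = 4, 8, 8, 2, 16
features = rng.normal(size=(C, H, W))
params = np.array([0.1, 2.0])          # e.g. viscosity, advection speed
W_embed = rng.normal(size=(P, D))
W_gate = rng.normal(size=(D, C))
out = channel_attention_embedding(features, params, W_embed, W_gate)
```

Because the gates are bounded in (0, 1), the module can only attenuate channels, which is one simple way such a component can modulate a base model without overriding its learned features; the paper's actual design may differ.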
