NOMAD: Nonlinear Manifold Decoders for Operator Learning

Jacob Seidman · Georgios Kissas · Paris Perdikaris · George J. Pappas

Hall J #117

Keywords: [ PDEs ] [ Functional Data ] [ Operator Learning ] [ Nonlinear Dimension Reduction ] [ Manifold Learning ]

Thu 1 Dec 9 a.m. PST — 11 a.m. PST


Supervised learning in function spaces is an emerging area of machine learning research with applications to the prediction of complex physical systems such as fluid flows, solid mechanics, and climate modeling. By directly learning maps (operators) between infinite dimensional function spaces, these models are able to learn discretization invariant representations of target functions. A common approach is to represent such target functions as linear combinations of basis elements learned from data. However, there are simple scenarios where, even though the target functions form a low dimensional submanifold, a very large number of basis elements is needed for an accurate linear representation. Here we present NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite dimensional representations of nonlinear submanifolds in function spaces. We show this method is able to accurately learn low dimensional representations of solution manifolds to partial differential equations while outperforming linear models of larger size. Additionally, we compare to state-of-the-art operator learning methods on a complex fluid dynamics benchmark and achieve competitive performance with a significantly smaller model size and training cost.
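The contrast between the two decoder types described above can be sketched in a few lines. A linear decoder represents a target function as a linear combination of learned basis functions, while a NOMAD-style nonlinear decoder feeds the finite-dimensional latent code, concatenated with a query coordinate, through a neural network. The sketch below is illustrative only: the dimensions, the random "basis," and the tiny one-hidden-layer network are stand-ins, not the architecture or parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): latent dimension p,
# number of query coordinates, hidden width of the toy decoder net.
p, n_queries, hidden = 8, 50, 32

beta = rng.normal(size=p)             # finite-dimensional latent code
y = np.linspace(0.0, 1.0, n_queries)  # 1-D query coordinates

# Linear decoder: u(y) = sum_i beta_i * tau_i(y).
# Random features stand in for a basis that would normally be learned.
tau = rng.normal(size=(p, n_queries))
u_linear = beta @ tau                 # shape (n_queries,)

# Nonlinear decoder: u(y) = f(beta, y), where f is a neural network
# evaluated pointwise at each query coordinate.
W1 = rng.normal(size=(p + 1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1))
b2 = np.zeros(1)

def nonlinear_decoder(beta, y):
    # Concatenate the latent code with each query coordinate, then
    # apply a one-hidden-layer tanh network as a toy stand-in for f.
    inp = np.concatenate([np.tile(beta, (y.size, 1)), y[:, None]], axis=1)
    h = np.tanh(inp @ W1 + b1)
    return (h @ W2 + b2).ravel()      # shape (n_queries,)

u_nonlinear = nonlinear_decoder(beta, y)
```

Both decoders map the same p-dimensional code to function values at the query points, but only the nonlinear one can represent targets that lie on a curved submanifold without inflating p.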
