Poster

Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs

Md Ashiqur Rahman · Robert Joseph George · Mogab Elleithy · Daniel Leibovici · Zongyi Li · Boris Bonev · Colin White · Julius Berner · Raymond A. Yeh · Jean Kossaifi · Kamyar Azizzadenesheli · Animashree Anandkumar

East Exhibit Hall A-C #4102
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs) due to complex geometries, interactions between physical variables, and the limited amounts of high-resolution training data. To address these issues, we propose the Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to function spaces. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations, fluid-structure interactions, and Rayleigh-Bénard convection, we find that CoDA-NO outperforms existing methods by over 36%.
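To make the codomain tokenization concrete, the sketch below treats each channel (physical variable) of a discretized input function as one token and runs self-attention across those tokens. This is a simplified illustration under stated assumptions, not the authors' implementation: the class name, parameters, and the plain linear query/key/value maps over a fixed grid are placeholders, whereas CoDA-NO computes these components in function space so the model stays discretization-agnostic.

```python
# Minimal sketch of codomain ("channel-wise") attention, assuming input
# functions are discretized on a fixed grid as a tensor of shape
# (batch, channels, H, W). Each channel becomes one token whose
# "embedding" is its full spatial discretization. All names here
# (CodomainSelfAttention, d_model, ...) are illustrative assumptions.
import torch
import torch.nn as nn


class CodomainSelfAttention(nn.Module):
    """Self-attention across codomain tokens (one token per channel).

    Simplified stand-in: the paper lifts queries/keys/values to function
    space via operator layers; here we use plain linear maps on
    flattened grids for illustration.
    """

    def __init__(self, grid_size: int, d_model: int):
        super().__init__()
        self.q = nn.Linear(grid_size, d_model)
        self.k = nn.Linear(grid_size, d_model)
        self.v = nn.Linear(grid_size, grid_size)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, H, W); flatten each channel's grid into a token.
        b, c, h, w = u.shape
        tokens = u.reshape(b, c, h * w)                  # (b, c, grid_size)
        q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=-1)             # (b, c, c)
        out = attn @ v                                   # mix physical variables
        return out.reshape(b, c, h, w)


# Usage: a system with 3 variables (e.g., u_x, u_y, pressure) on a
# 64x64 grid. Attention runs over the channel axis, so the same layer
# accepts inputs with a different number of channels.
layer = CodomainSelfAttention(grid_size=64 * 64, d_model=128)
state = torch.randn(2, 3, 64, 64)
print(layer(state).shape)  # torch.Size([2, 3, 64, 64])
```

Because attention operates over the channel axis rather than a fixed feature dimension, the variable count can differ across PDE systems, which is what makes pretraining a single backbone on multiple coupled systems possible.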
