Boundary-Augmented Neural Operators for Better Generalization to Unseen Geometries
Abstract
Neural operators have emerged as powerful machine learning models for solving parametric PDEs. For downstream tasks, it is often essential that trained models remain reliable on out-of-distribution samples. However, models trained exclusively on reference data, without additional guidance, can underperform when applied to inputs outside the training distribution. In addition, while boundary conditions can significantly impact PDE solutions, they are often overlooked in existing neural operator designs. In this work, we focus specifically on the challenge of geometry generalization in neural operators and introduce the Boundary-Augmented Neural Operator (BNO), a general framework that incorporates the interaction between the boundary and the full domain. We validate the proposed BNOs on an airfoil flap dataset and a new Poisson equation dataset, comparing them against existing neural operator architectures. In particular, we evaluate a special case of BNO that treats the boundary and the full domain separately, thereby retaining efficiency by exploiting the lower dimensionality of boundaries. Our results show that BNOs achieve greater robustness to changes in discretization and point distributions while maintaining high computational efficiency. Furthermore, they handle diverse geometric and topological variations with improved generalization to unseen geometries.
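To make the boundary-augmentation idea concrete, the sketch below shows one minimal way a model could treat the boundary and the full domain separately and then let the two interact: boundary points are encoded on their own (cheaply, since boundaries are lower-dimensional), pooled into a summary, and broadcast to every domain query point before decoding. All module names, layer sizes, and the pooling/fusion choices here are illustrative assumptions, not the BNO architecture described in this paper.

```python
# Hypothetical PyTorch sketch of a boundary-augmented operator: encode the
# boundary and the domain separately, then fuse a boundary summary into every
# domain point. Illustrative only; not the paper's BNO implementation.
import torch
import torch.nn as nn


class BoundaryAugmentedSketch(nn.Module):
    def __init__(self, coord_dim: int = 2, hidden: int = 64, out_dim: int = 1):
        super().__init__()
        # Encoder applied only to boundary points (lower-dimensional input set).
        self.boundary_encoder = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )
        # Encoder applied to full-domain query points.
        self.domain_encoder = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.GELU(), nn.Linear(hidden, hidden)
        )
        # Decoder acting on domain features concatenated with the boundary summary.
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.GELU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, domain_pts: torch.Tensor, boundary_pts: torch.Tensor) -> torch.Tensor:
        # domain_pts:   (batch, n_domain, coord_dim)
        # boundary_pts: (batch, n_boundary, coord_dim)
        b_feat = self.boundary_encoder(boundary_pts)      # per-boundary-point features
        b_summary = b_feat.mean(dim=1, keepdim=True)      # permutation-invariant pooling
        d_feat = self.domain_encoder(domain_pts)          # per-domain-point features
        b_broadcast = b_summary.expand(-1, d_feat.shape[1], -1)
        return self.decoder(torch.cat([d_feat, b_broadcast], dim=-1))


if __name__ == "__main__":
    model = BoundaryAugmentedSketch()
    u = model(torch.rand(4, 1024, 2), torch.rand(4, 128, 2))
    print(u.shape)  # torch.Size([4, 1024, 1])
```

Because the boundary is processed as a separate, much smaller point set, the extra cost of incorporating boundary information in a scheme like this scales with the number of boundary points rather than the number of domain points, which is the efficiency argument made in the abstract.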