Any-dimensional learning and PDEs
Soledad Villar
Abstract
Some machine learning models are defined by a fixed set of parameters yet can be evaluated on inputs of arbitrary size. This is the case for graph neural networks and for various machine learning models on point clouds and sets. The fundamental mathematical concept that allows these models to retain their expressive power as their inputs grow is related to a notion in algebra known as representation stability. In this talk, I introduce the relevant mathematical concepts, explain their implications for size generalization, and show how to apply them to learning PDEs in a coordinate-free, dimension-independent manner.
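As a concrete illustration of the first sentence (my own sketch, not part of the talk), the snippet below shows a DeepSets-style model for sets: its parameters have fixed shapes, yet the same model evaluates point clouds of any size, because the per-point map is applied independently and the pooling is permutation-invariant. All names and dimensions here are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    d, h = 3, 16                     # point dimension and hidden width (fixed)
    W_phi = rng.normal(size=(d, h))  # parameters of the per-point feature map
    W_rho = rng.normal(size=h)       # parameters of the fixed-size readout

    def deep_sets(X):
        """X: (n, d) array of n points; n may differ from call to call."""
        features = np.tanh(X @ W_phi)   # apply the map to each point independently
        pooled = features.mean(axis=0)  # permutation-invariant pooling over the set
        return pooled @ W_rho           # readout from the pooled (size-h) summary

    # The same fixed parameters evaluate sets of size 5 and 500 alike.
    print(deep_sets(rng.normal(size=(5, d))))
    print(deep_sets(rng.normal(size=(500, d))))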