

Poster

Learning Infinitesimal Generators of Continuous Symmetries from Data

Gyeonghoon Ko · Hyunsu Kim · Juho Lee

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Exploiting symmetry inherent in data can significantly improve the sample efficiency of a learning procedure and the generalization of learned models. When data clearly reveals an underlying symmetry, that symmetry can naturally inform the design of model architectures or learning strategies. Yet in many real-world scenarios, the specific symmetry present in a given data distribution is ambiguous. To tackle this, some existing works learn symmetry in a data-driven manner, parameterizing the expected symmetry and fitting it to data. However, they often rely on explicit prior knowledge, such as pre-defined generators, which are typically restricted to linear or affine transformations. In this paper, we propose a novel symmetry learning algorithm based on transformations defined by one-parameter groups. Our method is built on inductive biases that cover not only the commonly used symmetries rooted in Lie groups but also symmetries derived from nonlinear generators. To learn these symmetries, we introduce a validity score that quantifies the invariance of a target function with respect to a transformation of interest. The validity score is fully differentiable and easily computable given a pre-trained neural network or a target PDE, enabling an effective search for transformations that realize the symmetries innate in the data. We apply our method in two main domains, image data and partial differential equations, and demonstrate its advantages.
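The abstract does not spell out the validity score, but one natural instantiation of the idea is the (squared) Lie derivative of the target function along the vector field that generates the one-parameter group: it vanishes exactly when the function is invariant under the generated flow, and it is differentiable in the generator's parameters. The sketch below is our own illustration under these assumptions, not the paper's exact formulation; the names `validity_score`, `rot`, and `shift` are hypothetical.

```python
import numpy as np

def validity_score(f, generator, xs, eps=1e-4):
    """Mean squared directional derivative of f along `generator`.

    A score near zero indicates that f is (approximately) invariant under
    the one-parameter group the vector field `generator` generates.
    Illustrative sketch only; the paper's score may differ.
    """
    scores = []
    for x in xs:
        g = generator(x)
        # Central finite difference approximating the Lie derivative
        # (L_g f)(x) = d/dt f(x + t * g(x)) at t = 0.
        lie = (f(x + eps * g) - f(x - eps * g)) / (2 * eps)
        scores.append(lie ** 2)
    return float(np.mean(scores))

# f(x, y) = x^2 + y^2 is invariant under rotations about the origin.
f = lambda p: p[0] ** 2 + p[1] ** 2
rot = lambda p: np.array([-p[1], p[0]])  # infinitesimal rotation generator
shift = lambda p: np.array([1.0, 0.0])   # translation generator (not a symmetry of f)

pts = np.random.default_rng(0).normal(size=(100, 2))
print(validity_score(f, rot, pts))    # ~ 0: rotation is a symmetry of f
print(validity_score(f, shift, pts))  # > 0: translation is not
```

In practice one would replace the finite difference with automatic differentiation and parameterize `generator` with a neural network, so that minimizing the score over its parameters searches for symmetries of a pre-trained model or a PDE residual.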
