

Invited Talk in Workshop: Mathematics of Modern Machine Learning (M3L)

Benefits of learning with symmetries: eigenvectors, graph representations and sample complexity

Stefanie Jegelka


Abstract:

In many applications, especially in the sciences, data and tasks have known invariances. Encoding such invariances directly into a machine learning model can improve learning outcomes, but it also poses challenges for efficient model design. In the first part of the talk, we will focus on the invariances that arise when eigenvectors and eigenspaces are inputs to a neural network. Such inputs are important, for instance, for graph representation learning or orthogonally equivariant learning. We will discuss targeted architectures that can universally express functions with the relevant invariances or equivariances - sign flips and changes of basis - and their theoretical and empirical benefits. Second, we will take a broader theoretical perspective. Empirically, it is known that encoding invariances into the machine learning model can reduce sample complexity. For the simplified setting of kernel ridge regression or random features, we will discuss new bounds that illustrate two ways in which invariances can reduce sample complexity. Our results hold for learning on manifolds and for invariances to a wide range of group actions.
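
To make the sign-flip invariance mentioned above concrete, here is a minimal Python sketch of a sign-invariant eigenvector encoder. It illustrates the general idea (summing an encoder over both signs, in the spirit of sign-invariant architectures) and is only a hypothetical example, not the architecture presented in the talk; all names and dimensions are illustrative.

    import torch
    import torch.nn as nn

    class SignInvariantEncoder(nn.Module):
        """Minimal sketch: an eigenvector v and its negation -v describe the
        same eigenspace, so the encoding phi(v) + phi(-v) does not change
        when v is replaced by -v."""

        def __init__(self, dim_in: int, dim_hidden: int, dim_out: int):
            super().__init__()
            self.phi = nn.Sequential(
                nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                nn.Linear(dim_hidden, dim_out),
            )

        def forward(self, v: torch.Tensor) -> torch.Tensor:
            # Summing over both signs makes the output invariant to v -> -v.
            return self.phi(v) + self.phi(-v)

    # Quick check of the invariance on random input.
    enc = SignInvariantEncoder(8, 16, 4)
    v = torch.randn(2, 8)
    assert torch.allclose(enc(v), enc(-v), atol=1e-6)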
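
For the second part, on kernel ridge regression with invariances, a standard way to build an invariant kernel from an ordinary one is to average over a finite group of transformations, k_G(x, y) = (1/|G|) * sum_g k(x, g(y)). The sketch below shows this construction for sign flips with an RBF kernel; it is an illustrative example under these assumptions, not the precise setting of the talk's bounds.

    import numpy as np

    def group_averaged_kernel(k, group_actions):
        """Return a kernel made invariant to a finite group by averaging
        the base kernel k over all group actions applied to one argument."""
        def k_inv(x, y):
            return np.mean([k(x, g(y)) for g in group_actions])
        return k_inv

    # Example: invariance to sign flips of the input, with an RBF kernel.
    rbf = lambda x, y: np.exp(-np.sum((x - y) ** 2))
    k_inv = group_averaged_kernel(rbf, [lambda y: y, lambda y: -y])

    x, y = np.random.randn(3), np.random.randn(3)
    assert np.isclose(k_inv(x, y), k_inv(x, -y))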

This talk is based on joint work with Joshua Robinson, Derek Lim, Behrooz Tahmasebi, Lingxiao Zhao, Tess Smidt, Suvrit Sra and Haggai Maron.
