It has become an important goal of machine learning to develop methods that are exactly (or approximately) equivariant to group actions. Equivariant functions obey relations like f(g x) = g f(x); that is, if the inputs x are transformed by a group element g, then the outputs f(x) are correspondingly transformed. Two different kinds of symmetries can be encoded by these equivariances: active symmetries, which are observed regularities in the laws of physics, and passive symmetries, which arise from redundancies in the allowed representations of physical objects. In the first category are the symmetries that lead to conservation of momentum, energy, and angular momentum. In the second category are coordinate freedom, units equivariance, and gauge symmetry, among others. Passive symmetries always exist, even in situations in which the physical law is not actively symmetric. For example, the physics near the surface of the Earth is very strongly oriented (free objects fall in the down direction, usually), and yet the laws can be expressed in a perfectly coordinate-free way by making use of the local gravitational acceleration vector. The passive symmetries seem trivial, but they can lead naturally to the discovery of scalings, mechanistic structures, and missing geometric and dimensional quantities, even with very limited training data. Our conjecture is that enforcing passive symmetries in machine-learning models will improve generalization (both in and out of sample) in all areas of engineering and the natural sciences. In this talk we explain how to parameterize functions that satisfy (some of) these symmetries, using classical invariant theory.
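The equivariance relation f(g x) = g f(x) can be checked concretely on a toy example. Below is a minimal sketch (not the talk's construction, which uses classical invariant theory in full generality): a function built from the rotation-invariant scalar x · x is automatically O(2)-equivariant, since f(R x) = ((R x) · (R x)) R x = (x · x) R x = R f(x) for any rotation R. The function names `f` and `rotate` are illustrative choices, not from the source.

```python
import math

def f(x):
    # Equivariant map built from an invariant scalar:
    # f(x) = (x . x) * x, with x . x unchanged by rotations.
    s = x[0] * x[0] + x[1] * x[1]
    return [s * x[0], s * x[1]]

def rotate(x, theta):
    # Apply the 2D rotation R(theta) to the vector x.
    c, s = math.cos(theta), math.sin(theta)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

# Numerically verify f(g x) = g f(x) for one g and one x.
x = [0.3, -1.2]
theta = 0.7
lhs = f(rotate(x, theta))   # f(g x)
rhs = rotate(f(x), theta)   # g f(x)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

Because the only non-linear ingredient is an invariant scalar multiplying the input vector, equivariance holds by construction rather than by training; the scalar-based methods discussed in the talk generalize this idea to richer families of invariant scalars and vector features.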
Author Information
Soledad Villar (Johns Hopkins University)
More from the Same Authors

2021 : A simple equivariant machine learning method for dynamics based on scalars »
Weichi Yao · Kate Storey-Fisher · David W Hogg · Soledad Villar 
2021 : Constraints with Doug Burger, Alysson Muotri, Ralph Etienne-Cummings, Florian Engert »
Doug Burger · Florian Engert · Ralph EtienneCummings · Soledad Villar · Teresa Huang 
2022 : SE(3)-equivariant self-attention via invariant features »
Nan Chen · Soledad Villar 
2022 : From Local to Global: Spectral-Inspired Graph Neural Networks »
Ningyuan Huang · Soledad Villar · Carey E Priebe · Da Zheng · Chengyue Huang · Lin Yang · Vladimir Braverman 
2023 Poster: Fine-grained Expressivity of Graph Neural Networks »
Jan Böker · Ron Levie · Ningyuan Huang · Soledad Villar · Christopher Morris 
2023 Poster: Approximately Equivariant Graph Networks »
Ningyuan Huang · Ron Levie · Soledad Villar 
2021 : Introduction »
Weiwei Yang · Joshua T Vogelstein · Onyema Osuagwu · Soledad Villar · Johnathan Flowers · Weishung Liu · Ronan Perry · Kaleab Alemayehu Kinfu · Teresa Huang