It has become an important goal of machine learning to develop methods that are exactly (or approximately) equivariant to group actions. Equivariant functions obey relations like f(g x) = g f(x); that is, if the inputs x are transformed by a group element g, then the outputs f(x) are correspondingly transformed. Two different kinds of symmetries can be encoded by these equivariances: active symmetries, which are observed regularities in the laws of physics, and passive symmetries, which arise from redundancies in the allowed representations of the physical objects. In the first category are the symmetries that lead to conservation of momentum, energy, and angular momentum. In the second category are coordinate freedom, units equivariance, and gauge symmetry, among others.

Passive symmetries always exist, even in situations in which the physical law is not actively symmetric. For example, the physics near the surface of the Earth is very strongly oriented (free objects fall in the down direction, usually), and yet the laws can be expressed in a perfectly coordinate-free way by making use of the local gravitational acceleration vector. The passive symmetries seem trivial, but they can lead naturally to the discovery of scalings, mechanistic structures, and missing geometric and dimensional quantities, even with very limited training data.

Our conjecture is that enforcing passive symmetries in machine-learning models will improve generalization (both in and out of sample) in all areas of engineering and the natural sciences. In this talk we explain how to parameterize functions that satisfy (some) symmetries, using classical invariant theory.
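As a minimal illustration of the invariant-theory construction mentioned above (the function name and coefficient choice here are hypothetical, for demonstration only): a classical result says that any O(d)-equivariant vector-valued function of input vectors can be written as a sum of the input vectors weighted by coefficients that depend only on invariant scalars, namely the pairwise dot products. The sketch below builds such a function and numerically checks the equivariance relation f(Q x) = Q f(x) for a random orthogonal Q.

```python
import numpy as np

def equivariant_f(vectors):
    """Hypothetical O(3)-equivariant map from n 3-vectors to one 3-vector.

    Built via classical invariant theory: the output is a linear
    combination of the inputs, with coefficients that are functions of
    the invariant scalars <v_i, v_j> (the Gram matrix).
    """
    V = np.stack(vectors)               # shape (n, 3)
    gram = V @ V.T                      # O(3)-invariant dot products
    # Any function of the Gram matrix gives valid invariant coefficients;
    # this particular nonlinearity is an arbitrary illustrative choice.
    coeffs = np.tanh(gram.sum(axis=1))  # shape (n,)
    return coeffs @ V                   # equivariant output 3-vector

# Numerical check of f(Q v) = Q f(v) for a random orthogonal Q.
rng = np.random.default_rng(0)
vs = [rng.standard_normal(3) for _ in range(4)]
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

lhs = equivariant_f([Q @ v for v in vs])  # transform inputs, then apply f
rhs = Q @ equivariant_f(vs)               # apply f, then transform output
assert np.allclose(lhs, rhs)
```

The equivariance holds by construction: the Gram matrix is unchanged when every input is rotated or reflected, so the coefficients are unchanged, and the weighted sum of inputs transforms exactly as the inputs do.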