Learning Invariances in Neural Networks from Training Data
Gregory Benton · Marc Finzi · Pavel Izmailov · Andrew Wilson

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #799

Invariances to translations have imbued convolutional neural networks with powerful generalization properties. However, we often do not know a priori what invariances are present in the data, or to what extent a model should be invariant to a given augmentation. We show how to learn invariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and augmentation parameters. With this simple procedure we can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations, on training data alone. We show our approach is competitive with methods that are specialized to each task with the appropriate hard-coded invariances, without providing any prior knowledge of which invariance is needed.
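The core idea in the abstract can be sketched in a few lines: place a distribution over augmentations whose extent is controlled by a learnable parameter θ, average the network's predictions over sampled augmentations, and descend on the training loss with respect to both the model weights and θ (plus a small regularizer that rewards wider augmentation ranges, so θ grows as far as the data allows). The toy below is an illustrative numpy sketch under assumed simplifications, not the authors' implementation: a quadratic-feature linear model on 2D points, rotations as the only augmentation family, and finite-difference gradients in place of autograd. The target y = ||x||² is rotation invariant, so the learned rotation range θ should widen during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2D points with a rotation-invariant target y = ||x||^2.
X = rng.normal(size=(64, 2))
y = (X ** 2).sum(axis=1)

def rotate(X, angles):
    """Rotate each row of X by its matching angle."""
    c, s = np.cos(angles), np.sin(angles)
    return np.stack([c * X[:, 0] - s * X[:, 1],
                     s * X[:, 0] + c * X[:, 1]], axis=1)

def features(X):
    """Quadratic feature map: spans both invariant (x1^2 + x2^2)
    and non-invariant functions of x."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1, x2, x1 * x1, x2 * x2, x1 * x2], axis=1)

def loss(params, eps):
    """Augmentation-averaged MSE, minus a small bonus for a wide
    augmentation range (pushes theta outward where the loss is flat)."""
    w, theta = params[:-1], params[-1]
    preds = np.zeros(len(X))
    for e in eps:                                  # average predictions over
        preds += features(rotate(X, theta * e)) @ w  # sampled angles a = theta*eps
    preds /= len(eps)
    mse = np.mean((preds - y) ** 2)
    return mse - 0.05 * abs(theta)

def num_grad(f, p, h=1e-5):
    """Central finite-difference gradient (stand-in for autograd)."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p); d[i] = h
        g[i] = (f(p + d) - f(p - d)) / (2 * h)
    return g

params = np.concatenate([np.zeros(5), [0.1]])  # 5 weights + initial theta
K = 8                                          # augmentation samples per step
for step in range(400):
    eps = rng.uniform(-1, 1, size=(K, len(X)))  # reparameterized noise
    g = num_grad(lambda p: loss(p, eps), params)
    params -= 0.05 * g

w, theta = params[:-1], params[-1]
avg_pred = np.mean([features(rotate(X, theta * rng.uniform(-1, 1, len(X)))) @ w
                    for _ in range(32)], axis=0)
final_mse = np.mean((avg_pred - y) ** 2)
```

Because the averaged predictor can represent the invariant solution exactly (weight 1 on x1² and x2²), the data term becomes flat in θ once the weights converge, and the regularizer alone keeps widening the rotation range; on a non-invariant target the data term would instead penalize large θ and pin it near zero.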

Author Information

Gregory Benton (New York University)
Marc Finzi (New York University)
Pavel Izmailov (New York University)
Andrew Wilson (New York University)
I am a professor of machine learning at New York University.