Poster
Grounding inductive biases in natural images: invariance stems from variations in data
Diane Bouchacourt · Mark Ibrahim · Ari Morcos

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and model architectures. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement---despite the (approximate) translation invariance built into convolutional architectures, such as residual networks. In fact, we find that scale and translation invariance is similar across residual networks and vision transformer models despite their markedly different architectural inductive biases. We show the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the ImageNet factors of variation we found. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
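The kind of invariance measurement described above can be sketched in a few lines: compare a model's embedding of an image against embeddings of translated copies. This is a minimal illustrative sketch, not the authors' code; the toy global-pooling "model" and the cosine-similarity metric are assumptions chosen for self-containment (global pooling is exactly invariant to circular shifts, so it serves as a sanity check for the measurement itself).

```python
import numpy as np

def embed(image):
    # Toy "model": per-channel global average pooling.
    # Exactly invariant to circular translations, so it gives
    # a known reference point for the invariance score below.
    return image.mean(axis=(0, 1))

def translation_invariance(image, max_shift=8, n_samples=16, seed=0):
    """Mean cosine similarity between the embedding of the original
    image and embeddings of randomly translated copies.
    A score near 1.0 indicates translation invariance."""
    rng = np.random.default_rng(seed)
    base = embed(image)
    sims = []
    for _ in range(n_samples):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        emb = embed(shifted)
        sims.append(emb @ base / (np.linalg.norm(emb) * np.linalg.norm(base)))
    return float(np.mean(sims))

image = np.random.default_rng(1).random((32, 32, 3))
score = translation_invariance(image)
```

In practice one would replace `embed` with the penultimate-layer features of a trained residual network or vision transformer, and swap the circular shift for the crop-based translations used in standard augmentation pipelines.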

Author Information

Diane Bouchacourt (Meta AI)
Mark Ibrahim (Facebook AI Research)

Mark Ibrahim is a senior machine learning engineer with a background in mathematics, deep learning, and knowledge graphs. He has worked on methods to interpret neural network predictions and applications of deep learning to forecasting. He enjoys good coffee, eating well, and editing text in Vim.

Ari Morcos (Facebook AI Research)

More from the Same Authors

  • 2021 : Learning Background Invariance Improves Generalization and Robustness in Self Supervised Learning on ImageNet and Beyond »
    Chaitanya Ryali · David Schwab · Ari Morcos
  • 2021 Poster: CrypTen: Secure Multi-Party Computation Meets Machine Learning »
    Brian Knott · Shobha Venkataraman · Awni Hannun · Shubho Sengupta · Mark Ibrahim · Laurens van der Maaten
  • 2020 Poster: A Benchmark for Systematic Generalization in Grounded Language Understanding »
    Laura Ruis · Jacob Andreas · Marco Baroni · Diane Bouchacourt · Brenden Lake
  • 2020 Poster: The Generalization-Stability Tradeoff In Neural Network Pruning »
    Brian Bartoldson · Ari Morcos · Adrian Barbu · Gordon Erlebacher
  • 2019 : Contributed Session - Spotlight Talks »
    Jonathan Frankle · David Schwab · Ari Morcos · Qianli Ma · Yao-Hung Hubert Tsai · Ruslan Salakhutdinov · YiDing Jiang · Dilip Krishnan · Hossein Mobahi · Samy Bengio · Sho Yaida · Muqiao Yang
  • 2019 : Lunch Break and Posters »
    Xingyou Song · Elad Hoffer · Wei-Cheng Chang · Jeremy Cohen · Jyoti Islam · Yaniv Blumenfeld · Andreas Madsen · Jonathan Frankle · Sebastian Goldt · Satrajit Chatterjee · Abhishek Panigrahi · Alex Renda · Brian Bartoldson · Israel Birhane · Aristide Baratin · Niladri Chatterji · Roman Novak · Jessica Forde · YiDing Jiang · Yilun Du · Linara Adilova · Michael Kamp · Berry Weinstein · Itay Hubara · Tal Ben-Nun · Torsten Hoefler · Daniel Soudry · Hsiang-Fu Yu · Kai Zhong · Yiming Yang · Inderjit Dhillon · Jaime Carbonell · Yanqing Zhang · Dar Gilboa · Johannes Brandstetter · Alexander R Johansen · Gintare Karolina Dziugaite · Raghav Somani · Ari Morcos · Alfredo Kalaitzis · Hanie Sedghi · Lechao Xiao · John Zech · Muqiao Yang · Simran Kaur · Qianli Ma · Yao-Hung Hubert Tsai · Ruslan Salakhutdinov · Sho Yaida · Zachary Lipton · Daniel Roy · Michael Carbin · Florent Krzakala · Lenka Zdeborová · Guy Gur-Ari · Ethan Dyer · Dilip Krishnan · Hossein Mobahi · Samy Bengio · Behnam Neyshabur · Praneeth Netrapalli · Kris Sankaran · Julien Cornebise · Yoshua Bengio · Vincent Michalski · Samira Ebrahimi Kahou · Md Rifat Arefin · Jiri Hron · Jaehoon Lee · Jascha Sohl-Dickstein · Samuel Schoenholz · David Schwab · Dongyu Li · Sang Keun Choe · Henning Petzka · Ashish Verma · Zhichao Lin · Cristian Sminchisescu
  • 2019 Poster: One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers »
    Ari Morcos · Haonan Yu · Michela Paganini · Yuandong Tian
  • 2018 : Posters and Open Discussions (see below for poster titles) »
    Ramya Malur Srinivasan · Miguel Perez · Yuanyuan Liu · Ben Wood · Dan Philps · Kyle Brown · Daniel Martin · Mykola Pechenizkiy · Luca Costabello · Rongguang Wang · Suproteem Sarkar · Sangwoong Yoon · Zhuoran Xiong · Enguerrand Horel · Zhu (Drew) Zhang · Ulf Johansson · Jonathan Kochems · Gregory Sidier · Prashant Reddy · Lana Cuthbertson · Yvonne Wambui · Christelle Marfaing · Galen Harrison · Irene Unceta Mendieta · Thomas Kehler · Mark Weber · Li Ling · Ceena Modarres · Abhinav Dhall · Arash Nourian · David Byrd · Ajay Chander · Xiao-Yang Liu · Hongyang Yang · Shuang (Sophie) Zhai · Freddy Lecue · Sirui Yao · Rory McGrath · Artur Garcez · Vangelis Bacoyannis · Alexandre Garcia · Lukas Gonon · Mark Ibrahim · Melissa Louie · Omid Ardakanian · Cecilia Sönströd · Kojin Oshiba · Chaofan Chen · Suchen Jin · aldo pareja · Toyo Suzumura
  • 2018 Poster: Insights on representational similarity in neural networks with canonical correlation »
    Ari Morcos · Maithra Raghu · Samy Bengio