

Oral session in Workshop: ImageNet: Past, Present, and Future

Spotlight talk: Learning Background Invariance Improves Generalization and Robustness in Self-Supervised Learning on ImageNet and Beyond

Chaitanya Ryali


Abstract:

Unsupervised representation learning is an important challenge in computer vision. Recent progress in self-supervised learning has demonstrated promising results across multiple visual tasks. An important ingredient in high-performing self-supervised methods is data augmentation: models are trained to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image (e.g., a subject vs. its background), which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective “background augmentations”, which encourage models to focus on semantically relevant content by discouraging them from focusing on image backgrounds. Through a systematic, comprehensive investigation, we show that background augmentations lead to improved generalization, with substantial performance gains (~1-2% on ImageNet) across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, even allowing us to reach within 0.1% of supervised performance on ImageNet. We also find improved label efficiency, with even larger performance gains in limited-label settings (up to 4.2%). Further, training efficiency improves: in only 100 epochs, we attain a benchmark accuracy of 74.4%, outperforming many recent self-supervised methods trained for 800-1000 epochs. Importantly, we also demonstrate that background augmentations boost generalization and robustness in a number of out-of-distribution settings, including the Backgrounds Challenge, natural adversarial examples, adversarial attacks, ImageNet-Renditions, and ImageNet-ReaL. Finally, in the process of generating the saliency masks used for background augmentations, we also make progress on completely unsupervised saliency detection.
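The abstract does not spell out the augmentation pipeline, but the core operation it describes, replacing an image's background using a saliency mask so the model cannot rely on it, amounts to a simple alpha composite. Below is a minimal sketch of that idea; the function names, the mask convention (1 = foreground), and the noise-background variant are illustrative assumptions, not the paper's exact recipe:

```python
import torch

def background_swap(image: torch.Tensor, mask: torch.Tensor,
                    distractor: torch.Tensor) -> torch.Tensor:
    """Composite the salient foreground of `image` onto the background
    of another image (`distractor`), using a saliency mask.

    image, distractor: float tensors of shape (C, H, W) in [0, 1]
    mask: float tensor of shape (1, H, W); 1 = foreground, 0 = background
    """
    # Hypothetical background-swap augmentation: keep foreground pixels,
    # take background pixels from the distractor image.
    return mask * image + (1.0 - mask) * distractor

def background_random(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Replace the background with uniform noise, keeping only the
    salient foreground intact (an assumed variant for illustration)."""
    noise = torch.rand_like(image)
    return mask * image + (1.0 - mask) * noise
```

In a self-supervised pipeline, a view produced this way would presumably pass through the method's usual augmentation stack (random crop, color jitter, etc.) before being paired with another view in MoCo-v2/BYOL/SwAV-style training, so that matching the two views cannot be achieved by attending to the background.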