Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while clusters of samples from different classes are simultaneously pushed apart. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions, and is more stable to hyperparameter settings such as optimizers and data augmentations. In reduced data settings, it outperforms cross-entropy significantly. Our loss function is simple to implement and reference TensorFlow code is released at https://t.ly/supcon.
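The released reference code is TensorFlow, so the following is a minimal TensorFlow sketch of the batch-contrastive mechanism the abstract describes: same-class samples serve as positives for each anchor, all other batch samples as negatives. It follows the sum-over-positives-outside-the-log formulation (one of the two versions the paper compares). The function name supcon_loss, the temperature default, and the assumption of already L2-normalized embeddings are illustrative choices here, not the authors' released implementation.

```python
import tensorflow as tf

def supcon_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss over one batch.

    features: [batch, dim] tensor of L2-normalized embeddings (assumed).
    labels:   [batch] tensor of integer class labels.
    """
    batch = tf.shape(features)[0]
    # Pairwise cosine similarities, scaled by the temperature.
    logits = tf.matmul(features, features, transpose_b=True) / temperature
    # Exclude each anchor from its own denominator (no self-contrast).
    self_mask = tf.eye(batch, dtype=features.dtype)
    logits = logits - 1e9 * self_mask
    # Positive pairs share a label; the diagonal (self-pairs) is removed.
    labels = tf.reshape(labels, [-1, 1])
    pos_mask = tf.cast(tf.equal(labels, tf.transpose(labels)),
                       features.dtype) - self_mask
    # Log-probability of each pair under a softmax over all other samples.
    log_prob = logits - tf.reduce_logsumexp(logits, axis=1, keepdims=True)
    # Average over positives per anchor (sum outside the log), then over
    # anchors; guard against anchors with no in-batch positives.
    num_pos = tf.maximum(tf.reduce_sum(pos_mask, axis=1), 1.0)
    mean_log_prob_pos = tf.reduce_sum(pos_mask * log_prob, axis=1) / num_pos
    return -tf.reduce_mean(mean_log_prob_pos)
```

In the paper's two-view training setup, features would hold the normalized projections of two augmentations of each image, with labels duplicated to match; that batching is left to the caller in this sketch.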
Author Information
Prannay Khosla (Google LLC)
Piotr Teterwak (Google)
Chen Wang (Google)
Aaron Sarna (Google)
Yonglong Tian (MIT)
Phillip Isola (MIT)
Aaron Maschinot (Google Research)
Ce Liu (Google)
Dilip Krishnan (Google)
More from the Same Authors
- 2020 Poster: What Makes for Good Views for Contrastive Learning?
  Yonglong Tian · Chen Sun · Ben Poole · Dilip Krishnan · Cordelia Schmid · Phillip Isola
- 2020 Session: Orals & Spotlights Track 07: Vision Applications
  Ce Liu · Natalia Neverova
- 2019 Poster: Adversarial Robustness through Local Linearization
  Chongli Qin · James Martens · Sven Gowal · Dilip Krishnan · Krishnamurthy Dvijotham · Alhussein Fawzi · Soham De · Robert Stanforth · Pushmeet Kohli
- 2019 Poster: Learning to Control Self-Assembling Morphologies: A Study of Generalization via Modularity
  Deepak Pathak · Christopher Lu · Trevor Darrell · Phillip Isola · Alexei Efros
- 2019 Spotlight: Learning to Control Self-Assembling Morphologies: A Study of Generalization via Modularity
  Deepak Pathak · Christopher Lu · Trevor Darrell · Phillip Isola · Alexei Efros
- 2018 Poster: Large Margin Deep Networks for Classification
  Gamaleldin Elsayed · Dilip Krishnan · Hossein Mobahi · Kevin Regan · Samy Bengio
- 2016 Poster: Domain Separation Networks
  Konstantinos Bousmalis · George Trigeorgis · Nathan Silberman · Dilip Krishnan · Dumitru Erhan
- 2011 Poster: Understanding the Intrinsic Memorability of Images
  Phillip Isola · Devi Parikh · Antonio Torralba · Aude Oliva