Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm, which learns representations by pulling positive pairs, or similar examples from the same class, closer together while keeping negative pairs far apart. Despite the empirical successes, theoretical foundations are limited -- prior analyses assume conditional independence of the positive pairs given the same class label, whereas recent empirical applications use heavily correlated positive pairs (i.e., data augmentations of the same image). Our work analyzes contrastive learning without assuming conditional independence of positive pairs, using a novel concept of the augmentation graph on data. Edges in this graph connect augmentations of the same datapoint, and ground-truth classes naturally form connected sub-graphs. We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective on neural net representations. Minimizing this objective leads to features with provable accuracy guarantees under linear probe evaluation. By standard generalization bounds, these accuracy guarantees also hold when minimizing the training contrastive loss. In all, this work provides the first provable analysis for contrastive learning where the guarantees can apply to realistic empirical settings.
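The objective the abstract refers to is the spectral contrastive loss. In population form it can be written as L(f) = -2 E_{(x, x+)}[f(x)^T f(x+)] + E_{x, x'}[(f(x)^T f(x'))^2], where (x, x+) is a pair of augmentations of the same image and x, x' are augmentations drawn independently. The snippet below is a minimal PyTorch sketch of a mini-batch estimate of this objective; the function name, the use of in-batch negatives, and the diagonal masking are illustrative assumptions rather than the authors' reference implementation.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Illustrative mini-batch estimate of a spectral-contrastive-style loss.

    z1, z2: (batch, dim) features of two augmentations of the same images,
    so row i of z1 and row i of z2 form a positive pair; all cross pairs in
    the batch are treated as (approximately independent) negatives.
    """
    # Attraction term: -2 E[f(x)^T f(x+)], estimated over the positive pairs.
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Repulsion term: E[(f(x)^T f(x'))^2], estimated over off-diagonal
    # (negative) pairs within the batch.
    sim = z1 @ z2.T  # (batch, batch) matrix of inner products
    off_diag = ~torch.eye(sim.shape[0], dtype=torch.bool, device=sim.device)
    neg = (sim[off_diag] ** 2).mean()
    return pos + neg

# Example with random features standing in for encoder outputs on a batch of 256:
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(spectral_contrastive_loss(z1, z2))
```

In this form, the first term is the familiar contrastive pull on positive pairs, while the squared-similarity term plays the role of the repulsion between negatives; it is this form that lets the minimizer be characterized through the spectral decomposition of the population augmentation graph described in the abstract.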
Author Information
Jeff Z. HaoChen (Stanford University)
Colin Wei (Stanford University)
Adrien Gaidon (Toyota Research Institute)
Tengyu Ma (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss »
  Wed. Dec 8th 08:30 -- 10:00 AM
More from the Same Authors
- 2021 Spotlight: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2021: Calibrated Ensembles: A Simple Way to Mitigate ID-OOD Accuracy Tradeoffs »
  Ananya Kumar · Aditi Raghunathan · Tengyu Ma · Percy Liang
- 2021: Self-supervised Learning is More Robust to Dataset Imbalance »
  Hong Liu · Jeff Z. HaoChen · Adrien Gaidon · Tengyu Ma
- 2021: Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification »
  Ling Pan · Longbo Huang · Tengyu Ma · Huazhe Xu
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization »
  Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2022: DrML: Diagnosing and Rectifying Vision Models using Language »
  Yuhui Zhang · Jeff Z. HaoChen · Shih-Cheng Huang · Kuan-Chieh Wang · James Zou · Serena Yeung
- 2022: Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? »
  Gunshi Gupta · Tim G. J. Rudner · Rowan McAllister · Adrien Gaidon · Yarin Gal
- 2022 Poster: Amortized Proximal Optimization »
  Juhan Bae · Paul Vicol · Jeff Z. HaoChen · Roger Grosse
- 2022 Poster: Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers »
  Colin Wei · Yining Chen · Tengyu Ma
- 2022 Poster: Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations »
  Jeff Z. HaoChen · Colin Wei · Ananya Kumar · Tengyu Ma
- 2021: DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization Q&A »
  Aviral Kumar · Rishabh Agarwal · Tengyu Ma · Aaron Courville · George Tucker · Sergey Levine
- 2021 Poster: Label Noise SGD Provably Prefers Flat Global Minimizers »
  Alex Damian · Tengyu Ma · Jason Lee
- 2021 Poster: Learning Barrier Certificates: Towards Safe Reinforcement Learning with Zero Training-time Violations »
  Yuping Luo · Tengyu Ma
- 2021 Poster: Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration »
  Shengjia Zhao · Michael Kim · Roshni Sahoo · Tengyu Ma · Stefano Ermon
- 2021 Poster: Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning »
  Colin Wei · Sang Michael Xie · Tengyu Ma
- 2021 Poster: Safe Reinforcement Learning by Imagining the Near Future »
  Garrett Thomas · Yuping Luo · Tengyu Ma
- 2021 Poster: Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature »
  Kefan Dong · Jiaqi Yang · Tengyu Ma
- 2020 Poster: Self-training Avoids Using Spurious Features Under Domain Shift »
  Yining Chen · Colin Wei · Ananya Kumar · Tengyu Ma
- 2019: Tengyu Ma, "Designing Explicit Regularizers for Deep Models" »
  Tengyu Ma
- 2019 Poster: Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss »
  Kaidi Cao · Colin Wei · Adrien Gaidon · Nikos Arechiga · Tengyu Ma
- 2019 Poster: Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel »
  Colin Wei · Jason Lee · Qiang Liu · Tengyu Ma
- 2019 Spotlight: Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel »
  Colin Wei · Jason Lee · Qiang Liu · Tengyu Ma
- 2019 Poster: Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation »
  Colin Wei · Tengyu Ma
- 2019 Poster: Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks »
  Yuanzhi Li · Colin Wei · Tengyu Ma
- 2019 Spotlight: Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation »
  Colin Wei · Tengyu Ma
- 2019 Spotlight: Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks »
  Yuanzhi Li · Colin Wei · Tengyu Ma
- 2018: Poster Session »
  Sujay Sanghavi · Vatsal Shah · Yanyao Shen · Tianchen Zhao · Yuandong Tian · Tomer Galanti · Mufan Li · Gilad Cohen · Daniel Rothchild · Aristide Baratin · Devansh Arpit · Vagelis Papalexakis · Michael Perlmutter · Ashok Vardhan Makkuva · Pim de Haan · Yingyan Lin · Wanmo Kang · Cheolhyoung Lee · Hao Shen · Sho Yaida · Dan Roberts · Nadav Cohen · Philippe Casgrain · Dejiao Zhang · Tengyu Ma · Avinash Ravichandran · Julian Emilio Salazar · Bo Li · Davis Liang · Christopher Wong · Glen Bigan Mbeng · Animesh Garg
- 2017: Panel Discussion »
  Gregory Kahn · Ramesh Sarukkai · Adrien Gaidon · Sertac Karaman
- 2017: Deep Learning and photo-realistic Simulation for real-world Machine Intelligence in Autonomous Driving, Adrien Gaidon (TRI) »
  Adrien Gaidon
- 2017 Poster: On the Optimization Landscape of Tensor Decompositions »
  Rong Ge · Tengyu Ma
- 2017 Oral: On the Optimization Landscape of Tensor Decompositions »
  Rong Ge · Tengyu Ma
- 2016 Oral: Matrix Completion has No Spurious Local Minimum »
  Rong Ge · Jason Lee · Tengyu Ma
- 2016 Poster: Matrix Completion has No Spurious Local Minimum »
  Rong Ge · Jason Lee · Tengyu Ma
- 2016 Poster: A Non-generative Framework and Convex Relaxations for Unsupervised Learning »
  Elad Hazan · Tengyu Ma
- 2015 Poster: Sum-of-Squares Lower Bounds for Sparse PCA »
  Tengyu Ma · Avi Wigderson
- 2014 Poster: On Communication Cost of Distributed Statistical Estimation and Dimensionality »
  Ankit Garg · Tengyu Ma · Huy Nguyen
- 2014 Oral: On Communication Cost of Distributed Statistical Estimation and Dimensionality »
  Ankit Garg · Tengyu Ma · Huy Nguyen