Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations
Michael Zhang · Nimit Sohoni · Hongyang Zhang · Chelsea Finn · Christopher Ré
Event URL: https://openreview.net/forum?id=Q41kl_DwS3Y
We propose Correct-N-Contrast (CNC), a contrastive learning method to improve robustness to spurious correlations when training group labels are unknown. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by this alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving $7.7\%$ absolute lift in worst-group accuracy on the CelebA dataset, and performs almost as well as methods trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods.
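For a concrete picture of the two-stage procedure described in the abstract, below is a minimal PyTorch sketch. It is not the authors' released implementation: the helper names (infer_pseudo_groups, cnc_contrastive_loss) are hypothetical, and details such as the batch sampling scheme, the exact contrastive objective, and loss weighting are simplified.

import torch
import torch.nn.functional as F

def infer_pseudo_groups(erm_model, x):
    # Stage 1 (sketch): a standard ERM model is trained first; its predicted class
    # is reused as a proxy group label, since examples it misclassifies tend to
    # lack the spurious feature associated with their class.
    with torch.no_grad():
        return erm_model(x).argmax(dim=1)

def cnc_contrastive_loss(features, labels, pseudo_groups, temperature=0.1):
    # Stage 2 (sketch): pull together same-class examples drawn from different
    # inferred groups, and push apart different-class examples from the same
    # inferred group. `features` are assumed to be L2-normalized embeddings.
    sim = features @ features.t() / temperature
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_group = pseudo_groups.unsqueeze(0) == pseudo_groups.unsqueeze(1)
    diag = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    pos_mask = same_class & ~same_group & ~diag   # same class, different group
    neg_mask = ~same_class & same_group           # different class, same group

    total, count = features.new_zeros(()), 0
    for i in range(len(labels)):
        if pos_mask[i].any() and neg_mask[i].any():
            pos = sim[i][pos_mask[i]]                                   # (P,)
            neg = sim[i][neg_mask[i]]                                   # (N,)
            logits = torch.cat(
                [pos.unsqueeze(1), neg.expand(len(pos), -1)], dim=1)    # (P, 1 + N)
            targets = torch.zeros(len(pos), dtype=torch.long, device=labels.device)
            total = total + F.cross_entropy(logits, targets)
            count += 1
    return total / max(count, 1)

In a full training loop, this contrastive term would be combined with a standard cross-entropy loss on the same encoder, and batches would be sampled so that each anchor actually has such positives and negatives available.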
Author Information
Michael Zhang (Stanford University)
Nimit Sohoni (Stanford University)
Hongyang Zhang (Northeastern University)
Chelsea Finn (Stanford)
Christopher Ré (Stanford)
More from the Same Authors
- 2021 : Extending the WILDS Benchmark for Unsupervised Adaptation
  Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Ian Stavness · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 : Test Time Robustification of Deep Models via Adaptation and Augmentation
  Marvin Zhang · Sergey Levine · Chelsea Finn
- 2021 : The Reflective Explorer: Online Meta-Exploration from Offline Data in Realistic Robotic Tasks
  Rafael Rafailov · · Tianhe Yu · Avi Singh · Mariano Phielipp · Chelsea Finn
- 2021 : Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning
  Tianhe Yu · Aviral Kumar · Yevgen Chebotar · Chelsea Finn · Sergey Levine · Karol Hausman
- 2021 : CoMPS: Continual Meta Policy Search
  Glen Berseth · Zhiwei Zhang · Grace Zhang · Chelsea Finn · Sergey Levine
- 2021 : Discriminator Augmented Model-Based Reinforcement Learning
  Allan Zhou · Archit Sharma · Chelsea Finn
- 2022 : Task Modeling: Approximating Multitask Predictions for Cross-Task Transfer
  Dongyue Li · Huy Nguyen · Hongyang Zhang
- 2021 : Alex Ratner and Chris Re - The Future of Data Centric AI
  Christopher Ré
- 2021 Poster: Improved Regularization and Robustness for Fine-tuning in Neural Networks
  Dongyue Li · Hongyang Zhang
- 2021 Poster: Scatterbrain: Unifying Sparse and Low-rank Attention
  Beidi Chen · Tri Dao · Eric Winsor · Zhao Song · Atri Rudra · Christopher Ré
- 2020 : Mini-panel discussion 3 - Prioritizing Real World RL Challenges
  Chelsea Finn · Thomas Dietterich · Angela Schoellig · Anca Dragan · Anusha Nagabandi · Doina Precup
- 2020 : Keynote: Chelsea Finn
  Chelsea Finn
- 2020 : Tree Covers: An Alternative to Metric Embeddings
  Roshni Sahoo · Ines Chami · Christopher Ré
- 2020 Poster: No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems
  Nimit Sohoni · Jared Dunnmon · Geoffrey Angus · Albert Gu · Christopher Ré
- 2019 : Coffee/Poster session 1
  Shiro Takagi · Khurram Javed · Johanna Sommer · Amr Sharaf · Pierluca D'Oro · Ying Wei · Sivan Doveh · Colin White · Santiago Gonzalez · Cuong Nguyen · Mao Li · Tianhe Yu · Tiago Ramalho · Masahiro Nomura · Ahsan Alvi · Jean-Francois Ton · W. Ronny Huang · Jessica Lee · Sebastian Flennerhag · Michael Zhang · Abram Friesen · Paul Blomstedt · Alina Dubatovka · Sergey Bartunov · Subin Yi · Iaroslav Shcherbatyi · Christian Simon · Zeyuan Shang · David MacLeod · Lu Liu · Liam Fowl · Diego Mesquita · Deirdre Quillen
- 2019 Poster: Language as an Abstraction for Hierarchical Deep Reinforcement Learning
  YiDing Jiang · Shixiang (Shane) Gu · Kevin Murphy · Chelsea Finn
- 2016 Poster: Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
  Bryan He · Christopher M De Sa · Ioannis Mitliagkas · Christopher Ré
- 2016 Poster: Data Programming: Creating Large Training Sets, Quickly
  Alexander Ratner · Christopher M De Sa · Sen Wu · Daniel Selsam · Christopher Ré
- 2015 : Hardware Trends for High Performance Analytics
  Christopher Ré
- 2015 : Taking it Easy
  Christopher Ré
- 2015 Spotlight: Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré
- 2015 Poster: Taming the Wild: A Unified Analysis of Hogwild-Style Algorithms
  Christopher M De Sa · Ce Zhang · Kunle Olukotun · Christopher Ré