Debiased Contrastive Learning
Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, Stefanie Jegelka
Spotlight presentation: Orals & Spotlights Track 01: Representation/Relational
on 2020-12-07T20:00:00-08:00 - 2020-12-07T20:10:00-08:00
Poster Session 1
on 2020-12-07T21:00:00-08:00 - 2020-12-07T23:00:00-08:00
GatherTown: Learning with limited supervision ( Town C0 - Spot B1 )
Abstract: A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples. Without access to labels, dissimilar (negative) points are typically taken to be randomly sampled datapoints, implicitly accepting that these points may in fact have the same label as the anchor. Perhaps unsurprisingly, we observe that sampling negative examples with truly different labels improves performance in a synthetic setting where labels are available. Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels. Empirically, the proposed objective consistently outperforms the state of the art for representation learning in vision, language, and reinforcement learning benchmarks. Theoretically, we establish generalization bounds for the downstream classification task.
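To make the correction concrete, below is a minimal PyTorch sketch of a debiased InfoNCE-style objective in the spirit of the abstract: the expected contribution of accidentally same-label "negatives" is subtracted out using the positive pair as a proxy, with a class prior tau_plus. The function name debiased_contrastive_loss, the argument names, and the defaults tau_plus=0.1 and temperature=0.5 are illustrative assumptions, not the authors' reference implementation.

```python
import math
import torch

def debiased_contrastive_loss(pos_sim, neg_sim, tau_plus=0.1, temperature=0.5):
    """Debiased InfoNCE-style contrastive loss (sketch).

    pos_sim: (B,) similarities f(x)^T f(x+) for each anchor and its positive.
    neg_sim: (B, N) similarities between each anchor and N randomly sampled
             "negatives", a tau_plus fraction of which are expected to share
             the anchor's label.
    tau_plus: assumed prior probability that a random sample has the same
              label as the anchor.
    """
    N = neg_sim.size(1)
    pos = torch.exp(pos_sim / temperature)   # (B,)
    neg = torch.exp(neg_sim / temperature)   # (B, N)

    # Debiased estimate of the negatives' contribution: subtract the expected
    # share of same-label samples (approximated via the positive pair), then
    # clamp at the theoretical minimum N * e^{-1/temperature}.
    Ng = (neg.sum(dim=-1) - N * tau_plus * pos) / (1.0 - tau_plus)
    Ng = torch.clamp(Ng, min=N * math.exp(-1.0 / temperature))

    # Standard InfoNCE form, with the biased negative sum replaced by Ng.
    return (-torch.log(pos / (pos + Ng))).mean()
```

Setting tau_plus=0 recovers the usual (biased) contrastive objective, since Ng then reduces to the plain sum over the sampled negatives.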