Self-Supervised Learning from Structural Invariance
Yipeng Zhang · Hafez Ghaemi · Jungyoon Lee · Laurent Charlin
Abstract
Joint-embedding self-supervised learning (SSL) learns from invariances between semantically related data pairs. We study the one-to-many mapping problem in SSL, where each datum may be mapped to multiple valid targets. We show that existing methods struggle to capture this conditional uncertainty flexibly. As a remedy, we introduce a variational distribution that models this uncertainty in the latent space, and we derive a lower bound on the pairwise mutual information. We also propose a simpler variant of the same idea based on sparsity regularization. Our model, AdaSSL, is applicable to both contrastive and predictive SSL methods, and we empirically show its superiority on numerical data, images, and videos.
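To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation, whose details are not given here): a linear encoder outputs a Gaussian variational distribution over latent codes to represent the one-to-many ambiguity, a sampled code is scored with an InfoNCE objective (a standard lower bound on the mutual information between the two views' embeddings), and a simple L1 penalty stands in for the sparsity-regularized variant. All names and parameters (`encode`, `sample`, `info_nce`, the 0.01 weight) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical variational encoder head: maps each input to a Gaussian
    # over latent codes, modeling the one-to-many ambiguity of valid targets.
    return x @ W_mu, x @ W_logvar

def sample(mu, logvar, rng):
    # Reparameterized sample from the variational latent distribution.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def info_nce(z1, z2, tau=0.1):
    # InfoNCE: a contrastive lower bound on the mutual information
    # between paired embeddings; positives lie on the diagonal.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy semantically related pair of views (batch 8, input dim 16, latent dim 4).
x1 = rng.standard_normal((8, 16))
x2 = x1 + 0.05 * rng.standard_normal((8, 16))
W_mu = rng.standard_normal((16, 4))
W_logvar = 0.01 * rng.standard_normal((16, 4)) - 2.0

mu1, lv1 = encode(x1, W_mu, W_logvar)
mu2, lv2 = encode(x2, W_mu, W_logvar)
z1, z2 = sample(mu1, lv1, rng), sample(mu2, lv2, rng)

loss = info_nce(z1, z2)
sparsity = np.mean(np.abs(z1))   # L1 stand-in for the sparsity-regularized variant
total = loss + 0.01 * sparsity
```

In a real system, the linear maps would be deep networks trained by gradient descent; the sketch only shows how a per-sample latent distribution and an MI lower bound combine into one objective.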