Contrastive Learning with Latent Tension Regularization for Tight Orbits
Abstract
In self-supervised contrastive learning, multiple augmentations of the same input naturally form a set of latent representations, or an orbit. Ideally, these representations should remain compact and directionally consistent under transformations. Standard methods such as SimCLR prioritize separating different samples but do not explicitly enforce intra-orbit coherence, allowing augmented views of the same input to drift apart in latent space. We propose Orbit Regularization Loss (ORL), a lightweight extension of the Normalized Temperature-scaled Cross-Entropy (NT-Xent) loss that reweights negative pairs according to a tension score, a measure of how well a candidate negative's displacement aligns with the positive-pair direction. This encourages augmented views to align along stable latent directions, reducing orbit spread without architectural changes or additional supervision. In its current form, ORL targets the geometric structure of the embeddings rather than downstream classification accuracy. Experiments on MNIST and CIFAR-10 show that ORL lowers intra-orbit variance, improves directional consistency, and yields a more coherent latent-space geometry than the NT-Xent baseline.
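To make the reweighting idea concrete, the following is a minimal PyTorch sketch of a tension-weighted NT-Xent loss. It assumes the tension score is the absolute cosine alignment between the positive-pair direction and each candidate's displacement from the anchor, and that high-tension negatives are up-weighted in the contrastive denominator; the function name, the `lam` hyperparameter, and the exact weighting function are illustrative assumptions rather than the paper's specification.

```python
import torch
import torch.nn.functional as F


def orbit_regularized_nt_xent(z1, z2, temperature=0.5, lam=1.0):
    """Tension-weighted NT-Xent (illustrative sketch, not the exact ORL formulation).

    z1, z2 : (N, D) embeddings of two augmented views of the same batch.
    lam    : assumed hyperparameter controlling the strength of the reweighting.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                        # (2N, D)
    n = z1.size(0)
    n2 = z.size(0)

    sim = z @ z.t() / temperature                         # cosine-similarity logits
    sim.masked_fill_(torch.eye(n2, dtype=torch.bool, device=z.device), float('-inf'))

    # each anchor i is paired with its other augmented view (i + N, or i - N)
    pos_idx = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)

    # positive-pair direction for every anchor
    pos_dir = F.normalize(z[pos_idx] - z, dim=1)          # (2N, D)

    # displacement of every candidate k relative to every anchor i: disp[i, k] = z[k] - z[i]
    disp = F.normalize(z.unsqueeze(0) - z.unsqueeze(1), dim=2)   # (2N, 2N, D)

    # tension score: alignment between the positive direction and a candidate's displacement
    tension = (disp * pos_dir.unsqueeze(1)).sum(dim=2).abs()     # (2N, 2N)

    # reweight negatives by their tension (up-weighting is an assumption here;
    # the abstract does not specify the exact weighting function)
    weights = 1.0 + lam * tension
    weights[torch.arange(n2, device=z.device), pos_idx] = 1.0    # leave positives untouched
    logits = sim + torch.log(weights)

    return F.cross_entropy(logits, pos_idx)
```

In such a setup, a call like `loss = orbit_regularized_nt_xent(h1, h2)` would simply replace the standard NT-Xent term during pretraining, requiring no change to the encoder or projection head.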