Unifying Regression and Uncertainty Quantification with Contrastive Spectral Representation Learning
Abstract
In this work, we present NCP, a contrastive representation learning framework that introduces a new paradigm for training deep neural network architectures for regression, enabling high-quality regression estimates and parametric uncertainty quantification without retraining or restrictive assumptions on the uncertainty distribution. NCP learns high-dimensional data representations that transfer linearly to regression and uncertainty quantification tasks, backed by non-asymptotic statistical learning guarantees linking representation quality to downstream performance. Crucially, in equivariant regression settings, the NCP framework can be adapted to train any geometric deep learning architecture, yielding a disentangled equivariant representation learning algorithm with first-of-its-kind statistical guarantees for equivariant regression and symmetry-aware uncertainty quantification.
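To make the linear-transfer paradigm concrete, the following is a minimal illustrative sketch, not the authors' implementation: given a frozen NCP-style encoder, both the regression estimate and an uncertainty estimate are obtained by ordinary linear least squares on the learned features. The `encoder` function here is a hypothetical placeholder standing in for a trained network, and the variance-via-moments construction is one simple instantiation of downstream uncertainty quantification under these assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    """Hypothetical frozen feature map phi(x); stands in for a trained NCP encoder."""
    return np.concatenate([x, np.sin(x), np.cos(x)], axis=1)

# Toy data: y = f(x) + heteroscedastic noise.
x = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * x[:, 0]) + (0.1 + 0.2 * np.abs(x[:, 0])) * rng.standard_normal(500)

phi = encoder(x)  # frozen representation; no retraining below

# Linear heads on the frozen features: conditional mean E[y|x] and second moment E[y^2|x].
w_mean, *_ = np.linalg.lstsq(phi, y, rcond=None)
w_sq, *_ = np.linalg.lstsq(phi, y**2, rcond=None)

x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
phi_test = encoder(x_test)
mean = phi_test @ w_mean
var = np.maximum(phi_test @ w_sq - mean**2, 0.0)  # Var[y|x] = E[y^2|x] - E[y|x]^2

for xi, m, s in zip(x_test[:, 0], mean, np.sqrt(var)):
    print(f"x={xi:+.2f}  E[y|x]={m:+.3f}  std={s:.3f}")
```

Both heads reuse the same representation, illustrating how regression and uncertainty quantification can share one learned feature space without retraining the encoder.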