

Poster

Efficiency for Free: Ideal Data Are Transportable Representations

Peng Sun · Yi Jiang · Tao Lin

East Exhibit Hall A-C #2105
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Data, the seminal opportunity and challenge in modern machine learning, currently constrains the scalability of representation learning and impedes the pace of model evolution. In this work, we investigate the efficiency properties of data from both optimization and generalization perspectives. Our theoretical and empirical analysis reveals an unexpected finding: for a given task, utilizing a publicly available, task- and architecture-agnostic model (referred to as the 'prior model' in this paper) can effectively produce efficient data. Building on this insight, we propose the Representation Learning Accelerator (ReLA), which promotes the formation and utilization of efficient data, thereby accelerating representation learning. Utilizing a ResNet-18 pre-trained on CIFAR-10 as a prior model to inform ResNet-50 training on ImageNet-1K reduces computational costs by 50% while maintaining the same accuracy as the model trained with the original BYOL, which requires 100% cost. Our code is available at: https://github.com/LINs-lab/ReLA.
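To make the prior-model idea concrete, below is a minimal PyTorch sketch of the general setup the abstract describes: a small, frozen, task-agnostic prior model (e.g. a ResNet-18) supplies representation targets that guide the training of a larger target encoder (e.g. a ResNet-50). This is an illustrative assumption, not the authors' ReLA algorithm; the loss form, the projection head `proj`, and the function `prior_guidance_loss` are hypothetical names introduced here, and the actual method is in the linked repository.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, resnet50

# Frozen, task-agnostic prior model (e.g. pre-trained on CIFAR-10).
prior = resnet18(num_classes=10)
prior.eval()
for p in prior.parameters():
    p.requires_grad_(False)

# Larger target encoder being trained (output dim is arbitrary here).
target = resnet50(num_classes=128)

# Hypothetical projection mapping target features into the prior's space.
proj = torch.nn.Linear(128, 10)

opt = torch.optim.SGD(
    list(target.parameters()) + list(proj.parameters()), lr=0.1
)

def prior_guidance_loss(images):
    """Align target representations with the frozen prior's representations
    using a BYOL-style cosine-similarity loss (an assumed objective)."""
    with torch.no_grad():
        z_prior = prior(images)          # prior model's view of the data
    z_target = proj(target(images))      # target's projected features
    return 2 - 2 * F.cosine_similarity(z_target, z_prior, dim=-1).mean()

# One illustrative optimization step on a random batch; in practice this
# term would be combined with the self-supervised (BYOL) loss.
images = torch.randn(8, 3, 224, 224)
loss = prior_guidance_loss(images)
opt.zero_grad()
loss.backward()
opt.step()
print(f"guidance loss: {loss.item():.4f}")
```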
