

Spotlight Poster

Data curation via joint example selection further accelerates multimodal learning

Talfan Evans · Nikhil Parthasarathy · Hamza Merzic · Olivier Henaff

East Exhibit Hall A-C #1801
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Data curation is an essential component of large-scale pretraining. In this work, we demonstrate that jointly prioritizing batches of data is more effective for learning than selecting examples independently. Multimodal contrastive objectives expose the dependencies between data points and thus naturally yield criteria for measuring the joint learnability of a batch. We derive a simple and tractable algorithm for selecting such batches, which significantly accelerates training beyond individually prioritized data points. Because performance improves when selecting from larger super-batches, we also leverage recent advances in model approximation to reduce the computational overhead of scoring. As a result, our approach—multimodal contrastive learning with joint example selection (JEST)—surpasses state-of-the-art pretraining methods with up to 13$\times$ fewer iterations and 10$\times$ less computation. Essential to the performance of JEST is the ability to steer the data selection process towards the distribution of smaller, well-curated datasets via pretrained reference models, exposing data curation as a new dimension for neural scaling laws.
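The abstract describes two ingredients: a batch-level score derived from the multimodal contrastive objective, and a procedure for picking a sub-batch of a larger super-batch that maximizes that joint score with guidance from a pretrained reference model. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the helper names (`contrastive_losses`, `jest_select`), the greedy chunk-by-chunk selection, and the use of "learner loss minus reference-model loss" as the learnability score are illustrative assumptions (the abstract mentions reference models but does not spell out the exact criterion), and the model-approximation tricks mentioned in the abstract are omitted.

```python
import numpy as np

def contrastive_losses(img_emb, txt_emb, temperature=0.1):
    # Per-example softmax-contrastive loss (image->text plus text->image) for
    # L2-normalised embeddings; a hypothetical helper, not the paper's code.
    logits = img_emb @ txt_emb.T / temperature            # [B, B] similarity matrix
    pos = np.diag(logits)                                  # matched-pair similarities
    i2t = np.log(np.exp(logits).sum(axis=1)) - pos         # image-to-text loss per example
    t2i = np.log(np.exp(logits).sum(axis=0)) - pos         # text-to-image loss per example
    return i2t + t2i                                       # [B]

def joint_learnability(ids, learner, reference):
    # Score of a candidate sub-batch: learner loss minus reference-model loss,
    # both evaluated on the sub-batch's own similarity matrix, so the score
    # depends on which examples are grouped together (the "joint" part).
    (li, lt), (ri, rt) = learner, reference
    return (contrastive_losses(li[ids], lt[ids]).sum()
            - contrastive_losses(ri[ids], rt[ids]).sum())

def jest_select(learner, reference, batch_size, n_chunks=16):
    # Chunked greedy selection of a training batch from a super-batch. The paper's
    # selection procedure is not this greedy argmax; it is used here only to keep
    # the sketch short.
    li, lt = learner
    super_size, chunk = li.shape[0], batch_size // n_chunks
    # Seed with the individually most learnable examples over the full super-batch.
    indep = contrastive_losses(*learner) - contrastive_losses(*reference)
    selected = list(np.argsort(indep)[-chunk:])
    for _ in range(n_chunks - 1):
        remaining = np.setdiff1d(np.arange(super_size), selected)
        # Score each candidate by the joint learnability of (already selected + candidate).
        gains = [joint_learnability(np.array(selected + [i]), learner, reference)
                 for i in remaining]
        selected += list(remaining[np.argsort(gains)[-chunk:]])
    return np.asarray(selected[:batch_size])

# Toy usage with random unit-normalised embeddings standing in for model outputs.
rng = np.random.default_rng(0)
def unit(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
learner = (unit(rng.normal(size=(256, 64))), unit(rng.normal(size=(256, 64))))
reference = (unit(rng.normal(size=(256, 64))), unit(rng.normal(size=(256, 64))))
batch_ids = jest_select(learner, reference, batch_size=64, n_chunks=8)
```

The point of the joint score is that the contrastive loss of a sub-batch depends on which negatives appear alongside each positive pair, so scoring a batch as a whole captures dependencies between examples that independent per-example scores miss.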
