Revisiting Neural Scaling Laws in Language and Vision
Ibrahim Alabdulmohsin · Behnam Neyshabur · Xiaohua Zhai

Tue Nov 29 02:30 PM -- 04:00 PM (PST) @ Hall J #605

The remarkable progress in deep learning in recent years is largely driven by improvements in scale, where bigger models are trained on larger datasets for longer schedules. To predict the benefit of scale empirically, we argue for a more rigorous methodology based on the extrapolation loss, instead of reporting the best-fitting (interpolating) parameters. We then present a recipe for estimating scaling law parameters reliably from learning curves. We demonstrate that it extrapolates more accurately than previous methods across a wide range of architecture families and several domains, including image classification, neural machine translation (NMT), and language modeling, in addition to tasks from the BIG-Bench evaluation benchmark. Finally, we release a benchmark dataset comprising 90 evaluation tasks to facilitate research in this domain.
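To make the evaluation methodology concrete, below is a minimal sketch (not the paper's exact recipe or functional form): it assumes a generic saturating power law L(x) = c + a*x^(-b), fits it on the small-scale portion of a synthetic learning curve, and then scores the fit by its error on held-out larger scales, i.e., the extrapolation loss rather than the interpolating goodness of fit. All data and parameter values here are illustrative.

    # Minimal sketch: score a scaling-law fit by extrapolation error.
    # Assumes a generic saturating power law, not the paper's exact form.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(x, a, b, c):
        # Loss decays with scale toward an irreducible floor c.
        return c + a * np.power(x, -b)

    # Synthetic learning curve: scale x (e.g., dataset size) vs. loss.
    rng = np.random.default_rng(0)
    x = np.logspace(3, 7, 20)
    y = power_law(x, a=5.0, b=0.3, c=0.8) + rng.normal(0, 0.01, x.size)

    # Fit only on the smaller scales; hold out the largest for evaluation.
    n_fit = 12
    popt, _ = curve_fit(power_law, x[:n_fit], y[:n_fit],
                        p0=(1.0, 0.5, 0.5), maxfev=10_000)

    # Extrapolation loss: error on scales never seen during fitting,
    # rather than the (interpolating) residual on the fitted points.
    pred = power_law(x[n_fit:], *popt)
    extrapolation_rmse = np.sqrt(np.mean((pred - y[n_fit:]) ** 2))
    print(f"fitted (a, b, c) = {popt}")
    print(f"extrapolation RMSE = {extrapolation_rmse:.4f}")

Reporting the extrapolation RMSE rewards functional forms and estimators that generalize to unseen scales, which is the practical use case for scaling laws; an interpolating fit can look excellent while extrapolating poorly.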

Author Information

Ibrahim Alabdulmohsin (Google)
Behnam Neyshabur (Google)
Xiaohua Zhai (Google Brain)
