Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #806
There is mounting evidence of emergent phenomena in the capabilities of deep learning methods as we scale up datasets, model sizes, and training times. While there are some accounts of how these resources modulate statistical capacity, far less is known about their effect on the computational problem of model training. This work conducts such an exploration through the lens of learning a $k$-sparse parity of $n$ bits, a canonical discrete search problem which is statistically easy but computationally hard. Empirically, we find that a variety of neural networks successfully learn sparse parities, with discontinuous phase transitions in the training curves. On small instances, learning abruptly occurs at approximately $n^{O(k)}$ iterations; this nearly matches SQ lower bounds, despite the apparent lack of a sparse prior. Our theoretical analysis shows that these observations are not explained by a Langevin-like mechanism, whereby SGD "stumbles in the dark" until it finds the hidden set of features (a natural algorithm which also runs in $n^{O(k)}$ time). Instead, we show that SGD gradually amplifies the sparse solution via a Fourier gap in the population gradient, making continual progress that is invisible to loss and error metrics.
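Below is a minimal sketch (not the authors' code) of the $(n, k)$-sparse parity setup described in the abstract: inputs are drawn uniformly from $\{-1,+1\}^n$, the label is the product of $k$ hidden coordinates, and a small ReLU MLP is trained with online SGD. The width, learning rate, hinge loss, and support choice are illustrative assumptions, not the paper's exact configuration; on small instances the printed test error typically stays near chance for many iterations before dropping sharply.

```python
# Hypothetical sketch of the k-sparse parity learning task; hyperparameters are illustrative.
import torch

n, k = 30, 3                      # ambient dimension and sparsity of the hidden parity
S = torch.arange(k)               # hidden support; any k-subset of [n] works
width, lr, steps, batch = 100, 0.05, 20000, 32

model = torch.nn.Sequential(
    torch.nn.Linear(n, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=lr)

for t in range(steps):
    x = torch.randint(0, 2, (batch, n)).float() * 2 - 1   # uniform ±1 inputs
    y = x[:, S].prod(dim=1)                                # k-sparse parity label in {-1, +1}
    out = model(x).squeeze(-1)
    loss = torch.clamp(1 - y * out, min=0).mean()          # hinge loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    if t % 1000 == 0:
        with torch.no_grad():
            xe = torch.randint(0, 2, (2000, n)).float() * 2 - 1
            ye = xe[:, S].prod(dim=1)
            err = (model(xe).squeeze(-1).sign() != ye).float().mean().item()
        print(f"step {t:6d}  train loss {loss.item():.3f}  test error {err:.3f}")
```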

Author Information

Boaz Barak (Harvard University)
Benjamin Edelman (Harvard University)
Surbhi Goel (Microsoft Research NYC)
Sham Kakade (Harvard University & Amazon)
Eran Malach (Hebrew University of Jerusalem, Israel)
Cyril Zhang (Microsoft Research NYC)