Poster
Tue Dec 10 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #150
Can SGD Learn Recurrent Neural Networks with Provable Generalization?
Zeyuan Allen-Zhu · Yuanzhi Li

Recurrent Neural Networks (RNNs) are among the most popular models for sequential data analysis. Yet, in the foundational PAC learning language, what concept class can they learn? Moreover, how can the same recurrent unit simultaneously learn functions mapping different input tokens to different output tokens without interfering with each other? Existing generalization bounds for RNNs scale exponentially with the input length, significantly limiting their practical implications.

In this paper, we show that, trained with vanilla stochastic gradient descent (SGD), RNNs can actually learn a notable concept class \emph{efficiently}, meaning that both time and sample complexity scale \emph{polynomially} in the input length (or almost polynomially, depending on the concept). This concept class at least includes functions in which each output token is generated from the inputs of earlier tokens using a smooth two-layer neural network.
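To make the concept class concrete, here is a minimal, hypothetical sketch of the kind of target function it contains: the output at position t is produced from an earlier input token via a smooth two-layer network. The specific dimensions, activation, and weight distribution below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 16, 8                      # token dimension, hidden width of the (unknown) target
W = rng.standard_normal((k, d))   # first-layer weights of the target network
c = rng.standard_normal(k)        # second-layer combination weights


def phi(z):
    # a smooth activation (tanh chosen here for illustration)
    return np.tanh(z)


def target_output(x_tokens, t, t_prime):
    """Output token at position t, generated from the input token at an
    earlier position t_prime < t through a smooth two-layer network."""
    assert t_prime < t
    return float(c @ phi(W @ x_tokens[t_prime]))


# Example: a length-5 sequence of d-dimensional input tokens; the output at
# step 3 depends on the input seen at step 1.
x = rng.standard_normal((5, d))
y3 = target_output(x, t=3, t_prime=1)
```

The learning question studied in the paper is whether an RNN trained end-to-end with SGD can fit such targets with time and sample complexity polynomial in the sequence length, rather than exponential as in prior generalization bounds.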