

Poster

Can SGD Learn Recurrent Neural Networks with Provable Generalization?

Zeyuan Allen-Zhu · Yuanzhi Li

East Exhibition Hall B + C #150

Keywords: [ Theory ] [ Learning Theory ]


Abstract:

Recurrent Neural Networks (RNNs) are among the most popular models for sequential data analysis. Yet, in the foundational language of PAC learning, what concept class can they learn? Moreover, how can the same recurrent unit simultaneously learn mappings from different input tokens to different output tokens without interfering with one another? Existing generalization bounds for RNNs scale exponentially with the input length, significantly limiting their practical implications.

In this paper, we show that, using vanilla stochastic gradient descent (SGD), RNNs can actually learn a notable concept class \emph{efficiently}, meaning that both time and sample complexity scale \emph{polynomially} in the input length (or almost polynomially, depending on the concept). At a minimum, this concept class includes functions where each output token is generated from the inputs of earlier tokens by a smooth two-layer neural network.
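To make the learning setup concrete, the following is a minimal illustrative sketch, not the paper's construction or analysis: labels are generated by a fixed smooth two-layer network applied to an earlier input token, and a vanilla Elman RNN is fit with plain mini-batch SGD. All architecture sizes, the dependence pattern (output at step t from input at step t-1), and hyperparameters are arbitrary choices for illustration only.

```python
# Illustrative sketch: fit a vanilla RNN with SGD to a target where each
# output token is a smooth two-layer network of an earlier input token.
import torch
import torch.nn as nn

torch.manual_seed(0)
L, d, n_samples = 10, 8, 2000            # sequence length, token dimension, dataset size

# Ground-truth "concept": a fixed smooth two-layer network mapping x_{t-1} to y_t.
W1 = torch.randn(16, d) / d**0.5
W2 = torch.randn(1, 16) / 4.0
def target(x_prev):
    return torch.tanh(x_prev @ W1.T) @ W2.T  # tanh keeps the concept smooth

X = torch.randn(n_samples, L, d)
Y = torch.zeros(n_samples, L, 1)
Y[:, 1:, :] = target(X[:, :-1, :])           # output at step t depends on input at step t-1

# Learner: a vanilla (Elman) RNN with a linear read-out.
class Learner(nn.Module):
    def __init__(self, d, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(d, hidden, nonlinearity="tanh", batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x)                   # h: (batch, seq, hidden)
        return self.out(h)

model = Learner(d)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for step in range(2000):                     # plain mini-batch SGD
    idx = torch.randint(0, n_samples, (64,))
    opt.zero_grad()
    loss = loss_fn(model(X[idx]), Y[idx])
    loss.backward()
    opt.step()
print(f"final mini-batch loss: {loss.item():.4f}")
```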
