

Poster

Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks

Spencer Frei · Yuan Cao · Quanquan Gu

East Exhibition Hall B + C #195

Keywords: [ Theory ] [ Learning Theory ] [ Deep Learning ]


Abstract:

The skip connections used in residual networks have become a standard architectural choice in deep learning due to the improved generalization and stability they provide, although theoretical guarantees for this improved performance remain limited. In this work, we analyze overparameterized deep residual networks trained by gradient descent following random initialization, and demonstrate that (i) the class of networks learned by gradient descent constitutes a small subset of the entire neural network function class, and (ii) this subclass of networks is sufficiently large to guarantee small training error. By showing (i), we demonstrate that deep residual networks trained with gradient descent have a small generalization gap between training and test error, and together with (ii) this guarantees that the test error is small. Our optimization and generalization guarantees require overparameterization that is only logarithmic in the depth of the network, which helps explain why residual networks are preferable to fully connected ones.
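
To make the setting concrete, here is a minimal sketch of the kind of model and training procedure the abstract refers to: a deep residual network with identity skip connections, randomly initialized and trained by plain full-batch gradient descent. This is an illustration of the general setup, not the exact formulation analyzed in the paper; the width, depth, residual scaling `tau`, learning rate, loss, and synthetic data are all assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class ResidualMLP(nn.Module):
    """Feedforward network whose hidden layers use identity skip connections."""
    def __init__(self, in_dim, width, depth, tau=0.1):
        super().__init__()
        self.input = nn.Linear(in_dim, width, bias=False)
        self.blocks = nn.ModuleList(
            [nn.Linear(width, width, bias=False) for _ in range(depth)]
        )
        self.output = nn.Linear(width, 1, bias=False)
        self.tau = tau  # residual scaling factor (an assumption for this sketch)

    def forward(self, x):
        h = torch.relu(self.input(x))
        for block in self.blocks:
            # Skip connection: the block's output is added back onto its input.
            h = h + self.tau * torch.relu(block(h))
        return self.output(h)

# Synthetic +/-1 classification data (illustrative only, not from the paper).
torch.manual_seed(0)
n, d = 200, 10
X = torch.randn(n, d)
y = torch.sign(X[:, 0]).unsqueeze(1)

# "Overparameterized" regime: the width is much larger than the sample size.
model = ResidualMLP(in_dim=d, width=1024, depth=8)

# Plain gradient descent on a logistic-type loss, starting from random init.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.SoftMarginLoss()  # logistic loss for +/-1 labels
for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

The abstract's claims concern exactly this pipeline: gradient descent from random initialization stays within a small subclass of such networks, and that subclass is still rich enough to fit the training data, which together yield small test error.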
