Poster

Which Neural Net Architectures Give Rise to Exploding and Vanishing Gradients?

Boris Hanin

Room 210 #50

Keywords: [ Optimization for Deep Networks ] [ Learning Theory ]


Abstract:

We give a rigorous analysis of the statistical behavior of gradients in a randomly initialized fully connected network N with ReLU activations. Our results show that the empirical variance of the squares of the entries in the input-output Jacobian of N is exponential in a simple architecture-dependent constant beta, given by the sum of the reciprocals of the hidden layer widths. When beta is large, the gradients computed by N at initialization vary wildly. Our approach complements the mean field theory analysis of random networks. From this point of view, we rigorously compute finite width corrections to the statistics of gradients at the edge of chaos.
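
The following is a minimal numerical sketch (not from the paper) illustrating the architecture-dependent constant beta and the variability of Jacobian entries at initialization. It assumes a fully connected ReLU network with He-style weights (variance 2 / fan-in); the function names, layer widths, and sample counts are illustrative choices, not the authors' experimental setup.

```python
import numpy as np

def beta(hidden_widths):
    """Architecture constant from the abstract: sum of reciprocals of hidden layer widths."""
    return sum(1.0 / n for n in hidden_widths)

def jacobian_entry(d_in, hidden_widths, d_out, rng):
    """Entry (0, 0) of the input-output Jacobian of a random ReLU net at He-style init."""
    x = rng.standard_normal(d_in)            # random input point
    J = np.eye(d_in)                          # running Jacobian of the map so far
    dims = [d_in] + list(hidden_widths)
    for n_in, n_out in zip(dims[:-1], dims[1:]):
        W = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
        pre = W @ x
        mask = (pre > 0).astype(float)        # ReLU derivative (diagonal mask)
        x = pre * mask                        # hidden-layer activation
        J = (W * mask[:, None]) @ J           # chain rule: D @ W @ J
    W_out = rng.standard_normal((d_out, dims[-1])) * np.sqrt(2.0 / dims[-1])
    J = W_out @ J                             # linear output layer
    return J[0, 0]

rng = np.random.default_rng(0)
for hidden in ([1000] * 3, [10] * 3, [2] * 10):
    samples = np.array([jacobian_entry(5, hidden, 1, rng) ** 2 for _ in range(2000)])
    print(f"hidden widths {hidden}: beta = {beta(hidden):.2f}, "
          f"empirical Var(J_00^2) = {samples.var():.3g}")
```

Under these assumptions, architectures with small beta (wide hidden layers) should show comparatively stable squared Jacobian entries across initializations, while large-beta architectures (narrow or very deep) should show the wild fluctuations described in the abstract.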
