Poster

Large Scale Structure of Neural Network Loss Landscapes

Stanislav Fort · Stanislaw Jastrzebski

East Exhibition Hall B + C #91

Keywords: [ Visualization or Exposition Techniques for Deep Networks ] [ Optimization for Deep Networks ] [ Deep Learning ]


Abstract: There are many surprising and perhaps counter-intuitive properties of the optimization of deep neural networks. We propose and experimentally verify a unified phenomenological model of the loss landscape that incorporates many of them. High dimensionality plays a key role in our model: our core idea is to model the loss landscape as a set of high-dimensional \emph{wedges} that together form a large-scale, interconnected structure towards which optimization is drawn. We first show that hyperparameter choices such as learning rate, network width, and $L_2$ regularization affect the path the optimizer takes through the landscape in similar ways, influencing the large-scale curvature of the regions the optimizer explores. We then predict and demonstrate new counter-intuitive properties of the loss landscape: we show the existence of low-loss subspaces connecting a set (not only a pair) of solutions, and verify this experimentally. Finally, we analyze recently popular ensembling techniques for deep networks in the light of our model.
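To make the "set of solutions" claim concrete, below is a minimal sketch, not the authors' code, of one way to probe for a low-loss subspace spanned by several independently trained solutions: train a few copies of a small network from different initializations, flatten their weights, and evaluate the loss at random convex combinations (points on the simplex) of those weight vectors. All model, data, and helper names here are illustrative assumptions; note also that the paper's construction involves explicitly optimizing for a connecting subspace, whereas this diagnostic only measures the loss on the naive simplex between solutions.

```python
# Sketch: probe the loss on the simplex spanned by K trained solutions.
# Toy setup; everything here (model, data, helpers) is illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classification data.
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def flat_params(model):
    # Flatten all parameters into a single 1-D vector.
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def load_flat(model, vec):
    # Write a flat vector back into the model's parameter tensors.
    i = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(vec[i:i + n].view_as(p))
        i += n

def loss_at(vec, model, loss_fn):
    load_flat(model, vec)
    with torch.no_grad():
        return loss_fn(model(X), y).item()

loss_fn = nn.CrossEntropyLoss()

# Train K = 3 independent solutions from different initializations.
solutions = []
for _ in range(3):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    solutions.append(flat_params(model))

# Sample random points on the simplex spanned by the solutions
# (Dirichlet weights) and check whether the loss stays low there.
probe = make_model()
for _ in range(5):
    w = torch.distributions.Dirichlet(torch.ones(3)).sample()
    mix = sum(wi * si for wi, si in zip(w, solutions))
    print(f"weights={[round(x, 3) for x in w.tolist()]}  "
          f"loss={loss_at(mix, probe, loss_fn):.4f}")
```

On a naive simplex between independently trained solutions the loss typically rises away from the vertices; the paper's point is that suitably constructed low-loss subspaces connecting whole sets of solutions do exist, which a search over connectors (rather than this fixed simplex) would be needed to exhibit.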