In training deep neural network (DNN) models, computing and storing a full BFGS approximation, or even storing a modest number of (step, gradient-change) vector pairs for use in an L-BFGS implementation, is impractical. In our methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate these blocks, each of which corresponds to a layer, as the Kronecker product of two much smaller matrices, analogous to the approach in KFAC for approximating the Fisher matrix in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach that keeps the BFGS and L-BFGS approximations bounded, both above and below. In tests on autoencoder feed-forward and convolutional neural network models, our methods outperformed KFAC and were competitive with state-of-the-art first-order stochastic methods.
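To make the two ingredients above concrete, here is a minimal NumPy sketch of (i) applying a Kronecker-factored curvature block H ≈ A ⊗ G to a layer's gradient, and (ii) a standard Powell-damped inverse-BFGS update used as a stand-in for the paper's damping scheme. This is an illustrative sketch under simplifying assumptions, not the authors' K-BFGS/K-BFGS(L) implementation; the function names, the damping threshold mu, and the usage pattern shown in the comments are hypothetical.

```python
import numpy as np

def kron_precondition(dW, A_inv, G_inv):
    """Apply the inverse of a Kronecker-factored curvature block H ~ A ⊗ G
    to a layer's gradient matrix dW (shape m x n).

    With column-major vec(), (A ⊗ G)^{-1} vec(dW) = vec(G^{-1} dW A^{-1})
    when A (n x n) and G (m x m) are symmetric positive definite, so the
    layer update costs two small matrix products instead of a solve with
    the full (mn x mn) block.
    """
    return G_inv @ dW @ A_inv


def damped_bfgs_update(H, s, y, mu=0.2):
    """One inverse-BFGS update of H with Powell-style damping.

    s = parameter step, y = gradient change. If the curvature pair is too
    weak (s.y < mu * s.B.s with B = H^{-1}), y is blended with B s so that
    s.y stays sufficiently positive, keeping H positive definite.
    """
    Bs = np.linalg.solve(H, s)   # B s; the Kronecker factors are small, so this is cheap
    sBs = s @ Bs
    if s @ y < mu * sBs:
        theta = (1.0 - mu) * sBs / (sBs - s @ y)
        y = theta * y + (1.0 - theta) * Bs   # damped gradient change
    rho = 1.0 / (s @ y)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)


# Hypothetical usage for one layer with weight gradient dW (m x n):
#   step = -lr * kron_precondition(dW, A_inv, G_inv)
# where A_inv and G_inv are maintained by damped (L-)BFGS updates on the
# much smaller Kronecker factors rather than on the full layer Hessian block.
```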
Author Information
Donald Goldfarb (Columbia University)
More from the Same Authors
- 2021 Spotlight: Tensor Normal Training for Deep Learning Models »
  Yi Ren · Donald Goldfarb
- 2022: Efficient Second-Order Stochastic Methods for Machine Learning »
  Donald Goldfarb
- 2021 Poster: Tensor Normal Training for Deep Learning Models »
  Yi Ren · Donald Goldfarb
- 2020 Poster: Practical Quasi-Newton Methods for Training Deep Neural Networks »
  Donald Goldfarb · Yi Ren · Achraf Bahamou
- 2020 Spotlight: Practical Quasi-Newton Methods for Training Deep Neural Networks »
  Donald Goldfarb · Yi Ren · Achraf Bahamou
- 2019: Economical use of second-order information in training machine learning models »
  Donald Goldfarb
- 2019 Poster: Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models »
  Yunfei Teng · Wenbo Gao · François Chalus · Anna Choromanska · Donald Goldfarb · Adrian Weller
- 2010 Poster: Sparse Inverse Covariance Selection via Alternating Linearization Methods »
  Katya Scheinberg · Shiqian Ma · Donald Goldfarb