Despite the predominant use of first-order methods for training deep learning models, second-order methods, and in particular natural gradient methods, remain of interest because of their potential for accelerating training through the use of curvature information. Several methods with non-diagonal preconditioning matrices, including KFAC, Shampoo, and K-BFGS, have been proposed and shown to be effective. Based on the so-called tensor normal (TN) distribution, we propose and analyze a new approximate natural gradient method, Tensor Normal Training (TNT), which, like Shampoo, requires knowledge only of the shape of the training parameters. By approximating the probabilistically based Fisher matrix, as opposed to the empirical Fisher matrix, our method uses the block-wise covariance of the sampling-based gradient as the preconditioning matrix. Moreover, the assumption that the sampling-based (tensor) gradient follows a TN distribution ensures that its covariance has a Kronecker-separable structure, which leads to a tractable approximation of the Fisher matrix. Consequently, TNT's memory requirements and per-iteration computational costs are only slightly higher than those of first-order methods. In our experiments, TNT exhibited superior optimization performance to state-of-the-art first-order methods and comparable optimization performance to the state-of-the-art second-order methods KFAC and Shampoo. Moreover, TNT demonstrated its ability to generalize as well as first-order methods, while using fewer epochs.
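The abstract's key point is that a Kronecker-separable gradient covariance lets each layer's Fisher block be tracked with two small factors instead of one huge matrix. The sketch below illustrates that idea for a single matrix-shaped gradient; the function names, moving-average update, and damping scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def update_factors(L, R, G, beta=0.9):
    """Exponential moving average of Kronecker factors from a sampling-based
    gradient G (m x n), i.e. a gradient computed with labels sampled from the
    model's predictive distribution rather than the empirical labels.
    L (m x m) tracks row covariance, R (n x n) tracks column covariance.
    (Illustrative update rule, assumed for this sketch.)"""
    m, n = G.shape
    L = beta * L + (1.0 - beta) * (G @ G.T) / n
    R = beta * R + (1.0 - beta) * (G.T @ G) / m
    return L, R

def precondition(G, L, R, damping=1e-4):
    """Apply the inverse Kronecker-factored covariance to G:
    (L + damping*I)^{-1} @ G @ (R + damping*I)^{-1}."""
    m, n = G.shape
    Ld = L + damping * np.eye(m)
    Rd = R + damping * np.eye(n)
    # solve(Rd.T, G.T).T == G @ inv(Rd); outer solve applies inv(Ld) on the left
    return np.linalg.solve(Ld, np.linalg.solve(Rd.T, G.T).T)
```

Storing L and R costs O(m^2 + n^2) per layer instead of the O(m^2 n^2) needed for the full Fisher block, which is why memory and per-iteration cost stay close to first-order methods.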
Author Information
Yi Ren (Columbia University)
Donald Goldfarb (Columbia University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Tensor Normal Training for Deep Learning Models
More from the Same Authors
- 2022: Efficient Second-Order Stochastic Methods for Machine Learning
  Donald Goldfarb
- 2020: Invited speaker: Practical Kronecker-factored BFGS and L-BFGS methods for training deep neural networks
  Donald Goldfarb
- 2020 Poster: Practical Quasi-Newton Methods for Training Deep Neural Networks
  Donald Goldfarb · Yi Ren · Achraf Bahamou
- 2020 Spotlight: Practical Quasi-Newton Methods for Training Deep Neural Networks
  Donald Goldfarb · Yi Ren · Achraf Bahamou
- 2019: Economical use of second-order information in training machine learning models
  Donald Goldfarb
- 2019 Poster: Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models
  Yunfei Teng · Wenbo Gao · François Chalus · Anna Choromanska · Donald Goldfarb · Adrian Weller
- 2010 Poster: Sparse Inverse Covariance Selection via Alternating Linearization Methods
  Katya Scheinberg · Shiqian Ma · Donald Goldfarb