Poster
Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman · Kfir Levy · Ido Hakimi · Mark Silberstein
We propose a novel technique for faster deep neural network training that systematically applies sample-based approximation to the constituent tensor operations, i.e., matrix multiplications and convolutions. We introduce new sampling techniques, study their theoretical properties, and prove that they preserve the convergence guarantees of SGD training. We apply approximate tensor operations to single- and multi-node training of MLP and CNN networks on the MNIST, CIFAR-10, and ImageNet datasets. We demonstrate up to a 66% reduction in computation and communication and up to 1.37x faster training, with negligible or no impact on final test accuracy.
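The core idea, sample-based approximation of a matrix product, can be illustrated with a standard column-row sampling estimator (the paper's own sampling schemes may differ; the function name and parameters below are illustrative): approximate A @ B by sampling k column-row pairs with probabilities proportional to their norms and rescaling so the estimate is unbiased.

```python
import numpy as np

def approx_matmul(A, B, k, seed=None):
    """Approximate A @ B by sampling k column-row pairs.

    Column i of A and row i of B are sampled (with replacement)
    with probability proportional to ||A[:, i]|| * ||B[i, :]||,
    then rescaled by 1 / (k * p_i) so the estimator is unbiased.
    Computing the product over k < n sampled pairs reduces the
    work of the multiplication roughly by the factor k / n.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(n, size=k, replace=True, p=p)
    scale = 1.0 / (k * p[idx])  # unbiasedness correction per sampled pair
    return (A[:, idx] * scale) @ B[idx, :]
```

The approximation error shrinks as k grows, trading accuracy for compute; in training, the same sampling can also shrink the gradients exchanged between nodes, which is the source of the communication savings.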
Author Information
Menachem Adelman (Intel, Technion)
Kfir Levy (Technion)
Ido Hakimi (Technion – Israel Institute of Technology)
Mark Silberstein (Technion)
More from the Same Authors
- 2021 Poster: STORM+: Fully Adaptive SGD with Recursive Momentum for Nonconvex Optimization
  Kfir Levy · Ali Kavis · Volkan Cevher
- 2017 Poster: Online to Offline Conversions, Universality and Adaptive Minibatch Sizes
  Kfir Levy
- 2017 Poster: Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms
  Yatao Bian · Kfir Levy · Andreas Krause · Joachim M Buhmann