Adaptive stochastic gradient methods such as AdaGrad have gained popularity, in particular for training deep neural networks. The most commonly used and studied variant maintains a diagonal matrix approximation to second-order information by accumulating past gradients, which are used to tune the step size adaptively. In certain situations the full-matrix variant of AdaGrad is expected to attain better performance; however, in high dimensions it is computationally impractical. We present Ada-LR and RadaGrad, two computationally efficient approximations to full-matrix AdaGrad based on randomized dimensionality reduction. They are able to capture dependencies between features and achieve performance similar to full-matrix AdaGrad, but at a much smaller computational cost. We show that the regret of Ada-LR is close to the regret of full-matrix AdaGrad, which can have an up to exponentially smaller dependence on the dimension than the diagonal variant. Empirically, we show that Ada-LR and RadaGrad perform similarly to full-matrix AdaGrad. On the tasks of training convolutional neural networks as well as recurrent neural networks, RadaGrad achieves faster convergence than diagonal AdaGrad.
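To make the contrast in the abstract concrete, the following is a minimal illustrative sketch (not the authors' exact Ada-LR or RadaGrad updates) of the standard diagonal AdaGrad step alongside a simplified random-projection approximation to the full-matrix preconditioner. All names and parameters here (`Pi`, `tau`, `lr`, `eps`) are assumptions chosen for illustration; the low-dimensional statistics are maintained in a `tau`-dimensional subspace obtained by projecting each gradient with a random matrix.

```python
import numpy as np

def adagrad_diagonal_step(w, grad, state, lr=0.01, eps=1e-8):
    """Diagonal AdaGrad: accumulate squared gradients per coordinate and
    scale each coordinate's step by the inverse square root of that sum."""
    state += grad ** 2
    w -= lr * grad / (np.sqrt(state) + eps)
    return w, state

def adagrad_projected_step(w, grad, G_low, Pi, lr=0.01, eps=1e-8):
    """Illustrative random-projection approximation to full-matrix AdaGrad:
    project the gradient to tau dimensions, accumulate the full outer-product
    statistics there, precondition with the inverse square root of the small
    tau x tau matrix, and map the step back to the original space.
    This is a hypothetical stand-in, not the paper's Ada-LR/RadaGrad."""
    g_low = Pi @ grad                         # project gradient to tau dims
    G_low += np.outer(g_low, g_low)           # accumulate low-dim second moments
    vals, vecs = np.linalg.eigh(G_low + eps * np.eye(G_low.shape[0]))
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    w -= lr * (Pi.T @ (inv_sqrt @ g_low))     # precondition, map back, step
    return w, G_low

# Example setup (hypothetical sizes): d-dimensional parameters,
# tau-dimensional random projection with tau << d.
d, tau = 1000, 20
Pi = np.random.randn(tau, d) / np.sqrt(tau)
w, state, G_low = np.zeros(d), np.zeros(d), np.zeros((tau, tau))
```

The point of the sketch is the cost difference: the diagonal step is O(d) per iteration, while the projected variant pays only for a tau x tau eigendecomposition plus projections, rather than the O(d^2) (or worse) cost of maintaining and inverting the full d x d gradient outer-product matrix.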
Author Information
Gabriel Krummenacher (ETH Zurich)
Brian McWilliams (Disney Research)
Yannic Kilcher (ETH Zurich)
Joachim M Buhmann (ETH Zurich)
Nicolai Meinshausen (ETH Zurich)
More from the Same Authors
- 2020 Poster: Adversarial Training is a Form of Data-dependent Operator Norm Regularization
  Kevin Roth · Yannic Kilcher · Thomas Hofmann
- 2020 Spotlight: Adversarial Training is a Form of Data-dependent Operator Norm Regularization
  Kevin Roth · Yannic Kilcher · Thomas Hofmann
- 2017 Poster: Efficient and Flexible Inference for Stochastic Systems
  Stefan Bauer · Nico S Gorbach · Djordje Miladinovic · Joachim M Buhmann
- 2017 Poster: Non-monotone Continuous DR-submodular Maximization: Structure and Algorithms
  Yatao Bian · Kfir Levy · Andreas Krause · Joachim M Buhmann
- 2017 Poster: Scalable Variational Inference for Dynamical Systems
  Nico S Gorbach · Stefan Bauer · Joachim M Buhmann
- 2015 Poster: Variance Reduced Stochastic Gradient Descent with Neighbors
  Thomas Hofmann · Aurelien Lucchi · Simon Lacoste-Julien · Brian McWilliams
- 2015 Poster: BACKSHIFT: Learning causal cyclic graphs from unknown shift interventions
  Dominik Rothenhäusler · Christina Heinze-Deml · Jonas Peters · Nicolai Meinshausen
- 2014 Poster: Fast and Robust Least Squares Estimation in Corrupted Linear Models
  Brian McWilliams · Gabriel Krummenacher · Mario Lucic · Joachim M Buhmann
- 2014 Spotlight: Fast and Robust Least Squares Estimation in Corrupted Linear Models
  Brian McWilliams · Gabriel Krummenacher · Mario Lucic · Joachim M Buhmann
- 2013 Poster: Correlated random features for fast semi-supervised learning
  Brian McWilliams · David Balduzzi · Joachim M Buhmann
- 2011 Workshop: Philosophy and Machine Learning
  Marcello Pelillo · Joachim M Buhmann · Tiberio Caetano · Bernhard Schölkopf · Larry Wasserman
- 2006 Poster: Denoising and Dimension Reduction in Feature Space
  Mikio L Braun · Joachim M Buhmann · Klaus-Robert Müller