

Poster in Workshop: OPT 2021: Optimization for Machine Learning

Random-reshuffled SARAH does not need full gradient computations

Aleksandr Beznosikov · Martin Takac


Abstract:

The StochAstic Recursive grAdient algoritHm (SARAH) is a variance-reduced variant of the Stochastic Gradient Descent (SGD) algorithm that needs a full gradient of the objective function from time to time. In this paper, we remove the necessity of a full gradient computation. This is achieved by using a randomized reshuffling strategy and aggregating the stochastic gradients obtained in each epoch. The aggregated stochastic gradients serve as an estimate of the full gradient in the SARAH algorithm. We provide a theoretical analysis of the proposed approach and conclude the paper with numerical experiments that demonstrate its efficiency.
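To make the idea in the abstract concrete, here is a minimal sketch of a SARAH-style loop with random reshuffling, assuming the epoch-averaged stochastic gradients are what replaces the periodic full-gradient pass. The function name `shuffled_sarah`, the signature of `grad_i`, and the exact aggregation rule are illustrative assumptions, not the authors' precise algorithm.

```python
import numpy as np

def shuffled_sarah(grad_i, w0, n, lr=0.1, epochs=10, seed=0):
    """Sketch of SARAH with random reshuffling and no full gradient pass.

    grad_i(w, i) -- gradient of the i-th component function at w
    (illustrative interface; details are assumptions, not the paper's
    exact method).
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    # Bootstrap the recursive estimator from one random component;
    # a full gradient over all n components is never computed.
    v = grad_i(w, rng.integers(n))
    for _ in range(epochs):
        agg = np.zeros_like(w)            # sum of this epoch's stochastic gradients
        for i in rng.permutation(n):      # randomized reshuffling of components
            w_next = w - lr * v           # step with the current estimator
            g_next = grad_i(w_next, i)
            agg += g_next
            # SARAH recursion: v <- grad_i(w_next) - grad_i(w) + v
            v = g_next - grad_i(w, i) + v
            w = w_next
        # The aggregated stochastic gradients act as the full-gradient
        # estimate that restarts the estimator for the next epoch.
        v = agg / n
    return w
```

In this sketch, the only change relative to a standard SARAH epoch is the restart rule: instead of recomputing the full gradient at the start of each epoch, the average of the gradients already evaluated during the previous epoch is reused, so no extra pass over the data is required.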
