Poster
GraB: Finding Provably Better Data Permutations than Random Reshuffling
Yucheng Lu · Wentao Guo · Christopher De Sa
Random reshuffling, which randomly permutes the dataset each epoch, is widely adopted in model training because it yields faster convergence than with-replacement sampling. Recent studies indicate that greedily chosen data orderings can further speed up convergence empirically, at the cost of using more computation and memory. However, greedy ordering lacks theoretical justification and has limited utility due to its non-trivial memory and computation overhead. In this paper, we first formulate an example-ordering framework named \emph{herding} and prove that SGD with herding converges at the rate $O(T^{-2/3})$ on smooth, non-convex objectives, faster than the $O(n^{1/3}T^{-2/3})$ obtained by random reshuffling, where $n$ denotes the number of data points and $T$ denotes the total number of iterations. To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension. We show empirically on applications including MNIST, CIFAR10, WikiText and GLUE that GraB can outperform random reshuffling in terms of both training and validation performance, and even outperform state-of-the-art greedy ordering while reducing memory usage by over $100\times$.
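As a rough illustration of the gradient-balancing idea described above (a simplified offline sketch, not the paper's exact online procedure), the snippet below greedily assigns each mean-centered gradient a $\pm 1$ sign so the running signed sum stays small, then places $+1$ examples at the front of the next epoch's order and $-1$ examples at the back. The function name `balance_and_reorder`, the use of NumPy, and the offline structure (storing all gradients, which costs $O(nd)$ memory) are illustrative assumptions; the actual GraB algorithm performs the balancing online during training to reach $O(d)$ memory.

```python
import numpy as np

def balance_and_reorder(grads):
    """Simplified sketch of greedy gradient balancing (hypothetical helper).

    grads: (n, d) array of per-example gradients observed in one epoch.
    Returns a permutation of example indices for the next epoch.
    """
    n, d = grads.shape
    centered = grads - grads.mean(axis=0)  # balance around the mean gradient
    running = np.zeros(d)
    front, back = [], []
    for i in range(n):
        v = centered[i]
        # Greedily pick the sign that keeps the norm of the partial sum smaller
        # (the discrepancy-minimization step).
        if np.linalg.norm(running + v) <= np.linalg.norm(running - v):
            running += v
            front.append(i)
        else:
            running -= v
            back.append(i)
    # +1 examples go to the front in order, -1 examples to the back in reverse.
    return front + back[::-1]
```

In this sketch the reordering is recomputed from scratch each epoch; the online variant instead reuses the previous epoch's average gradient as the centering vector and builds the new permutation as gradients arrive.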
Author Information
Yucheng Lu (Cornell University)
Wentao Guo (Cornell University)
Christopher De Sa (Cornell University)
More from the Same Authors
- 2022 Poster: Model Preserving Compression for Neural Networks
  Jerry Chee · Megan Flynn (née Renz) · Anil Damle · Christopher De Sa
- 2022 Poster: Understanding Hyperdimensional Computing for Parallel Single-Pass Learning
  Tao Yu · Yichi Zhang · Zhiru Zhang · Christopher De Sa
- 2022 Poster: From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
  Christopher De Sa · Satyen Kale · Jason Lee · Ayush Sekhari · Karthik Sridharan
- 2021 Poster: Hyperparameter Optimization Is Deceiving Us, and How to Stop It
  A. Feder Cooper · Yucheng Lu · Jessica Forde · Christopher De Sa