Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g., Adam) and accelerated schemes (e.g., stochastic gradient descent (SGD) with momentum). For many models such as convolutional neural networks (CNNs), adaptive methods typically converge faster but generalize worse compared to SGD; for complex settings such as generative adversarial networks (GANs), adaptive methods are typically the default because of their stability. We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition for AdaBelief is to adapt the stepsize according to the "belief" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves comparable accuracy to SGD. Furthermore, in the training of a GAN on CIFAR-10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer. Code is available at https://github.com/juntang-zhuang/Adabelief-Optimizer
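The following is a minimal NumPy sketch intended only to make the "belief" idea concrete; the function name, signature, and default hyperparameters are assumptions for illustration, not the authors' reference implementation (see the repository linked above). Relative to Adam, the only change is that the second-moment term tracks the squared deviation of the gradient from its EMA prediction, (g_t - m_t)^2, rather than g_t^2, so the step is large when the observed gradient agrees with the prediction and small when it deviates.

import numpy as np

def adabelief_step(param, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Illustrative sketch of one AdaBelief update for a single parameter array;
    # names and default values are assumptions, not the official implementation.
    m = beta1 * m + (1 - beta1) * grad                    # EMA of gradients: prediction of the next gradient
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps   # EMA of squared deviation: large deviation = low "belief"
    m_hat = m / (1 - beta1 ** t)                          # bias correction, as in Adam (t counts from 1)
    s_hat = s / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(s_hat) + eps)   # small step when the deviation term s is large
    return param, m, s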
Author Information
Juntang Zhuang (Yale University)
Tommy Tang (University of Illinois Urbana-Champaign)
Yifan Ding (University of Central Florida)
Sekhar C Tatikonda (Yale University)
Nicha Dvornek (Yale University)
Xenophon Papademetris (Yale University)
James Duncan (Yale University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients
  Fri. Dec 11th, 05:00 -- 07:00 AM, Poster Session 6 #1864
More from the Same Authors
- 2023 Poster: Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective
  Chenyu You · Weicheng Dai · Yifei Min · Fenglin Liu · David Clifton · S. Kevin Zhou · Lawrence Staib · James Duncan
- 2022: Session 2 Keynote 2
  James Duncan
- 2022 Poster: Class-Aware Adversarial Transformers for Medical Image Segmentation
  Chenyu You · Ruihan Zhao · Fenglin Liu · Siyuan Dong · Sandeep Chinchali · Ufuk Topcu · Lawrence Staib · James Duncan
- 2021 Poster: Momentum Centering and Asynchronous Update for Adaptive Gradient Methods
  Juntang Zhuang · Yifan Ding · Tommy Tang · Nicha Dvornek · Sekhar C Tatikonda · James Duncan
- 2017 Poster: Accelerated consensus via Min-Sum Splitting
  Patrick Rebeschini · Sekhar C Tatikonda
- 2014 Poster: Testing Unfaithful Gaussian Graphical Models
  De Wen Soh · Sekhar C Tatikonda