Poster
A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions
Damek Davis · Dmitriy Drusvyatskiy · Yin Tat Lee · Swati Padmanabhan · Guanghao Ye
Zhang et al. (ICML 2020) introduced a novel modification of Goldstein's classical subgradient method, with an efficiency guarantee of $O(\varepsilon^{-4})$ for minimizing Lipschitz functions. Their work, however, relies on an oracle that is not efficiently implementable. In this paper, we obtain the same efficiency guarantee with a standard subgradient oracle, making our algorithm efficiently implementable. The resulting method works on any Lipschitz function whose value and gradient can be evaluated at points of differentiability. We additionally present a new cutting plane algorithm that achieves an efficiency of $O(d\varepsilon^{-2}\log S)$ for the class of $S$-smooth (and possibly non-convex) functions in low dimensions. Strikingly, this $\varepsilon$-dependence matches the lower bounds for the convex setting.
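For intuition, below is a minimal sketch of the Goldstein-style gradient sampling mechanism the abstract refers to: repeatedly test a step of size $\delta$ along the normalized negative of a candidate subgradient, and if it fails to decrease the function, sample a gradient at a randomly perturbed point of the trial segment and shrink the candidate toward the minimal-norm element. This is an illustrative sketch, not the authors' exact algorithm; the decrease factor $1/2$, the perturbation scale, the iteration caps, and the toy $\ell_1$ example are all assumptions made for illustration.

```python
import numpy as np

def min_norm_pair(g, h):
    """Minimal-norm point on the segment [g, h] (closed form for two vectors)."""
    d = h - g
    dd = d @ d
    if dd == 0.0:
        return g
    t = np.clip(-(g @ d) / dd, 0.0, 1.0)
    return g + t * d

def goldstein_sampling_step(f, grad, x, delta, inner_iters=50, rng=None):
    """One outer step: descend by delta along an approximate minimal-norm
    element of the Goldstein delta-subdifferential at x.  Illustrative
    sketch only; constants are not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    # Random perturbation so the gradient is queried at a point of
    # differentiability (almost surely), matching the standard oracle model.
    g = grad(x + 1e-6 * delta * rng.standard_normal(x.shape))
    for _ in range(inner_iters):
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-12:
            break  # x is approximately delta-stationary
        y = x - delta * g / gnorm
        # Goldstein-style sufficient-decrease test.
        if f(y) <= f(x) - 0.5 * delta * gnorm:
            return y
        # No decrease: sample a gradient at a perturbed random point of the
        # segment [x, y] and shrink g toward the minimal-norm element.
        z = x + rng.uniform() * (y - x) + 1e-6 * delta * rng.standard_normal(x.shape)
        g = min_norm_pair(g, grad(z))
    return x

# Toy usage on the nonsmooth Lipschitz function f(x) = ||x||_1.
if __name__ == "__main__":
    f = lambda x: np.abs(x).sum()
    grad = lambda x: np.sign(x)  # valid wherever f is differentiable
    x = np.array([3.0, -2.0, 1.5])
    for _ in range(200):
        x = goldstein_sampling_step(f, grad, x, delta=0.05)
    print(f(x))  # approaches a delta-stationary point near 0
```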
Author Information
Damek Davis (Cornell University)
Damek Davis is an Associate Professor of Operations Research at Cornell University. His research focuses on the interplay of optimization, signal processing, statistics, and machine learning. He has received several awards for his work, including a Sloan Research Fellowship in Mathematics (2020), the INFORMS Optimization Society Young Researchers Prize (2019), and an NSF CAREER Award (2021).
Dmitriy Drusvyatskiy (University of Washington)
Yin Tat Lee (University of Washington)
Swati Padmanabhan (University of Washington, Seattle)
Guanghao Ye (Massachusetts Institute of Technology)
I am a second-year PhD student in the MIT Department of Mathematics.
More from the Same Authors
- 2021 Spotlight: Numerical Composition of Differential Privacy
  Sivakanth Gopi · Yin Tat Lee · Lukas Wutschitz
- 2021 Spotlight: Private Non-smooth ERM and SCO in Subquadratic Steps
  Janardhan Kulkarni · Yin Tat Lee · Daogao Liu
- 2023 Poster: Computing Approximate $\ell_p$ Sensitivities
  Swati Padmanabhan · David Woodruff · Richard Zhang
- 2023 Poster: When does over-parameterized SGD take as few iterations to converge as gradient descent?
  Chaoyue Liu · Dmitriy Drusvyatskiy · Yian Ma · Damek Davis · Misha Belkin
- 2023 Poster: Learning threshold neurons via edge of stability
  Kwangjun Ahn · Sebastien Bubeck · Sinho Chewi · Yin Tat Lee · Felipe Suarez · Yi Zhang
- 2023: Aiming towards the minimizers: fast convergence of SGD for overparameterized problems
  Dmitriy Drusvyatskiy
- 2022 Panel: Panel 1A-3: A gradient sampling… & Local Bayesian optimization…
  Swati Padmanabhan · Quan Nguyen
- 2022 Poster: A Fast Scale-Invariant Algorithm for Non-negative Least Squares with Non-negative Data
  Jelena Diakonikolas · Chenghui Li · Swati Padmanabhan · Chaobing Song
- 2022 Poster: Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity
  Sally Dong · Haotian Jiang · Yin Tat Lee · Swati Padmanabhan · Guanghao Ye
- 2021 Poster: Private Non-smooth ERM and SCO in Subquadratic Steps
  Janardhan Kulkarni · Yin Tat Lee · Daogao Liu
- 2021 Poster: Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
  Yin Tat Lee · Ruoqi Shen · Kevin Tian
- 2021 Poster: Fast and Memory Efficient Differentially Private-SGD via JL Projections
  Zhiqi Bu · Sivakanth Gopi · Janardhan Kulkarni · Yin Tat Lee · Judy Hanwen Shen · Uthaipon Tantipongpipat
- 2021 Poster: Numerical Composition of Differential Privacy
  Sivakanth Gopi · Yin Tat Lee · Lukas Wutschitz
- 2021 Oral: Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
  Yin Tat Lee · Ruoqi Shen · Kevin Tian
- 2020 Poster: Acceleration with a Ball Optimization Oracle
  Yair Carmon · Arun Jambulapati · Qijia Jiang · Yujia Jin · Yin Tat Lee · Aaron Sidford · Kevin Tian
- 2020 Oral: Acceleration with a Ball Optimization Oracle
  Yair Carmon · Arun Jambulapati · Qijia Jiang · Yujia Jin · Yin Tat Lee · Aaron Sidford · Kevin Tian
- 2020 Poster: Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication Time
  Jerry Li · Guanghao Ye
- 2020 Poster: Network size and size of the weights in memorization with two-layers neural networks
  Sebastien Bubeck · Ronen Eldan · Yin Tat Lee · Dan Mikulincer
- 2019 Poster: The Randomized Midpoint Method for Log-Concave Sampling
  Ruoqi Shen · Yin Tat Lee
- 2019 Spotlight: The Randomized Midpoint Method for Log-Concave Sampling
  Ruoqi Shen · Yin Tat Lee
- 2018 Poster: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee
- 2018 Oral: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Laurent Massoulié · Yin Tat Lee