

Poster
in
Workshop: OPT 2023: Optimization for Machine Learning

Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit Global (Accelerated) Convergence Rates

Damien Scieur


Abstract:

Despite the impressive numerical performance of quasi-Newton and Anderson/nonlinear acceleration methods, their global convergence rates have remained elusive for over 50 years. This paper addresses this long-standing question by introducing a framework that derives novel, adaptive quasi-Newton and nonlinear/Anderson acceleration schemes. Under mild assumptions, the proposed iterative methods exhibit explicit, non-asymptotic convergence rates that blend those of gradient descent and the Cubic Regularized Newton method. The approach also includes an accelerated version for convex functions. Notably, these rates are achieved adaptively, without prior knowledge of the function's smoothness parameter. The framework is generic: algorithms such as Newton's method with random subspaces, finite differences, or lazy Hessian updates arise as special cases. Numerical experiments demonstrate the efficiency of the proposed framework, even compared to the L-BFGS algorithm with Wolfe line search.
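For readers unfamiliar with the Anderson acceleration family the paper builds on, below is a minimal sketch of classical (type-II) Anderson acceleration for a fixed-point problem x = g(x). This is generic background, not the paper's adaptive scheme: the function and parameter names (`anderson`, `g`, memory size `m`) are illustrative assumptions, and no globalization or adaptivity is included.

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    """Classical type-II Anderson acceleration for x = g(x).

    Keeps the last m differences of iterates/residuals and extrapolates
    the next iterate via a small least-squares problem. Illustrative
    sketch only; the paper's adaptive framework differs.
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                     # fixed-point residual
    dG, dF = [], []                # histories of g-value / residual differences
    for _ in range(iters):
        if np.linalg.norm(f) < tol:
            break
        gx_prev, f_prev = gx, f
        if dF:
            F = np.column_stack(dF)
            # gamma minimizes ||f - F @ gamma|| over the memory window
            gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
            x = gx - np.column_stack(dG) @ gamma
        else:
            x = gx                 # plain fixed-point step on the first pass
        gx = g(x)
        f = gx - x
        dG.append(gx - gx_prev)
        dF.append(f - f_prev)
        if len(dF) > m:            # drop the oldest pair beyond the memory m
            dG.pop(0)
            dF.pop(0)
    return x

# Illustrative usage: a contractive affine map g(x) = A @ x + b,
# whose fixed point is (I - A)^{-1} b.
rng = np.random.default_rng(0)
n = 20
A = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = anderson(lambda v: A @ v + b, np.zeros(n))
```

On linear problems such as the demo above, this extrapolation is closely related to GMRES, which is one reason it can dramatically outperform the plain fixed-point iteration.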
