Poster
Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization
Caio Kalil Lauand · Sean Meyn

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #936
Stochastic approximation is a foundation for many algorithms in machine learning and optimization. It is in general slow to converge: the mean square error vanishes as $O(n^{-1})$. A deterministic counterpart known as quasi-stochastic approximation is a viable alternative in many applications, including gradient-free optimization and reinforcement learning. Prior research assumed that the optimal achievable convergence rate is $O(n^{-2})$. This paper shows that, through careful algorithm design, far faster convergence of order $O(n^{-4+\delta})$ is achievable, with $\delta>0$ arbitrary. Two techniques are introduced for the first time to achieve this rate of convergence. The theory is also specialized to gradient-free optimization and tested on standard benchmarks. The main results rest on a novel application of results from number theory combined with techniques adapted from stochastic approximation theory.
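The abstract does not spell out an algorithm, but the general idea of quasi-stochastic approximation for gradient-free optimization can be sketched: replace the random probing signal of classical SPSA-style methods with a deterministic mixture of sinusoids at distinct frequencies, chosen so the empirical covariance of the probe averages to the identity. The Python sketch below is a minimal illustration under that assumption; the function name qsa_gradient_free, the frequency choices, and the step-size schedule are illustrative and are not taken from the paper.

import numpy as np

def qsa_gradient_free(f, theta0, n_iter=20000, eps=0.1, a0=0.5, rho=0.85):
    # Minimal sketch of QSA-based gradient-free optimization (illustrative
    # parameters, not the authors' design). The probing signal xi_n is a
    # deterministic vector of sinusoids rather than random noise.
    d = len(theta0)
    theta = np.asarray(theta0, dtype=float).copy()
    # Distinct frequencies so the time average of xi xi^T approaches the identity.
    omega = np.pi * np.sqrt(2.0) * (1.0 + np.arange(d))
    dt = 1e-2
    for n in range(1, n_iter + 1):
        xi = np.sqrt(2.0) * np.sin(omega * n * dt)  # deterministic probe
        # Two-point finite-difference estimate of the gradient along xi.
        g = xi * (f(theta + eps * xi) - f(theta - eps * xi)) / (2.0 * eps)
        theta -= (a0 / n ** rho) * g                # vanishing step size
    return theta

# Example: minimize a simple quadratic with minimizer at (1, 1, 1).
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)
    print(qsa_gradient_free(f, np.zeros(3)))

Because the probe is deterministic, the estimate has no Monte Carlo variance; the excess error comes only from the finite-difference bias and the averaging of the sinusoids, which is the mechanism the paper exploits to accelerate convergence.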

Author Information

Caio Kalil Lauand (University of Florida)

Caio Kalil Lauand (caio.kalillauand@ufl.edu) received the B.S.E.E. degree from the University of North Florida. He is a Ph.D. student at the University of Florida under the supervision of Prof. Sean Meyn. His focus is on stochastic approximation and applications such as optimization and reinforcement learning.

Sean Meyn (University of Florida)
