

Poster

Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits

Arya Akhavan · Massimiliano Pontil · Alexandre Tsybakov

Poster Session 5 #1393

Keywords: [ Privacy, Anonymity, and Security ] [ Applications ] [ Reinforcement Learning and Planning ] [ Reinforcement Learning ]


Abstract:

We address the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by sequentially exploring its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the online regret. To solve this problem, we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function that achieves almost sharp oracle behavior. We compare our results with the state of the art, highlighting a number of key improvements.
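To make the abstract's procedure concrete, here is a minimal sketch of a two-point randomized gradient estimate with a smoothing kernel plugged into projected gradient descent. It is an illustration under assumptions, not the authors' exact method: the kernel K(r) = 3r, the step-size and perturbation schedules, and the toy noisy quadratic objective are all choices made for this example.

```python
import numpy as np

def two_point_gradient_estimate(f, x, h, rng, kernel=lambda r: 3.0 * r):
    """Randomized two-point gradient estimate with a smoothing kernel.

    f      : noisy zero-order oracle; f(x) returns one scalar observation
    x      : current point, shape (d,)
    h      : smoothing (perturbation) radius
    kernel : weighting kernel K(r); the linear kernel here is an
             illustrative choice, not necessarily the paper's.
    """
    d = x.shape[0]
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)           # uniform direction on the unit sphere
    r = rng.uniform(-1.0, 1.0)             # scalar smoothing variable
    y_plus = f(x + h * r * zeta)           # first function evaluation
    y_minus = f(x - h * r * zeta)          # second function evaluation
    return (d / (2.0 * h)) * (y_plus - y_minus) * zeta * kernel(r)

def projected_zero_order_descent(f, x0, steps, project,
                                 eta=lambda t: 1.0 / (t + 1),
                                 h=lambda t: (t + 1) ** (-0.25),
                                 seed=0):
    """Projected gradient descent driven by the two-point estimator.
    The schedules eta(t) and h(t) are placeholder assumptions."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(steps):
        g_hat = two_point_gradient_estimate(f, x, h(t), rng)
        x = project(x - eta(t) * g_hat)    # projection keeps iterates feasible
    return x

# Toy usage: noisy quadratic minimized over the unit Euclidean ball.
if __name__ == "__main__":
    rng_noise = np.random.default_rng(1)
    target = np.array([0.3, -0.2])
    noisy_f = lambda x: np.sum((x - target) ** 2) + 0.01 * rng_noise.standard_normal()
    proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
    x_hat = projected_zero_order_descent(noisy_f, x0=np.zeros(2), steps=2000,
                                         project=proj_ball)
    print("estimate:", x_hat)
```

Each iteration costs exactly two noisy function evaluations; the kernel weight K(r) is what lets the estimator exploit higher order smoothness of the objective, which is the effect studied in the paper.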
